| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/7387
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7387/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7387/comments
|
https://api.github.com/repos/ollama/ollama/issues/7387/events
|
https://github.com/ollama/ollama/issues/7387
| 2,616,920,983
|
I_kwDOJ0Z1Ps6b-wuX
| 7,387
|
Ollama Docker Container Disk Usage Growing Exponentially
|
{
"login": "vanduc2514",
"id": 52255055,
"node_id": "MDQ6VXNlcjUyMjU1MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/52255055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vanduc2514",
"html_url": "https://github.com/vanduc2514",
"followers_url": "https://api.github.com/users/vanduc2514/followers",
"following_url": "https://api.github.com/users/vanduc2514/following{/other_user}",
"gists_url": "https://api.github.com/users/vanduc2514/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vanduc2514/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vanduc2514/subscriptions",
"organizations_url": "https://api.github.com/users/vanduc2514/orgs",
"repos_url": "https://api.github.com/users/vanduc2514/repos",
"events_url": "https://api.github.com/users/vanduc2514/events{/privacy}",
"received_events_url": "https://api.github.com/users/vanduc2514/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-10-27T22:42:58
| 2024-11-05T20:51:15
| 2024-11-05T20:50:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I've observed significant disk usage growth within the Ollama Docker container, even when using an external volume for model storage. Initially, the container occupies approximately 2 GB of overlay2 disk space. However, after processing a few prompts with different models, the disk usage increases rapidly, often exceeding five times the initial size.
While running, Ollama appears to write data to `/tmp`, which increases the disk usage.
Container ENVs
```shell
OLLAMA_MODELS: /root/.ollama/models
OLLAMA_NUM_PARALLEL: 2
OLLAMA_FLASH_ATTENTION: true
OLLAMA_NOHISTORY: true
```
Here is the disk usage graph in my homelab
<img width="804" alt="image" src="https://github.com/user-attachments/assets/4ccf5d85-d907-4394-abb2-c711fb1c5d6d">
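If the growth really does come from temporary files under `/tmp`, one possible mitigation (a sketch, not a confirmed fix for this issue) is to keep those files out of the container's writable layer by mounting a tmpfs over `/tmp`; the volume and image names below are illustrative:

```shell
# Mount a tmpfs over /tmp so Ollama's temporary files live in memory,
# disappear with the container, and never grow the overlay2 layer.
docker run -d --name ollama \
  -v ollama_models:/root/.ollama \
  --mount type=tmpfs,destination=/tmp \
  -e OLLAMA_MODELS=/root/.ollama/models \
  ollama/ollama
```

Alternatively, `OLLAMA_TMPDIR` (visible in Ollama's server config output) could point the temp directory at an external mount.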
### OS
WSL2
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.8
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7387/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2484
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2484/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2484/comments
|
https://api.github.com/repos/ollama/ollama/issues/2484/events
|
https://github.com/ollama/ollama/pull/2484
| 2,133,408,417
|
PR_kwDOJ0Z1Ps5mz0TT
| 2,484
|
set `shutting_down` to `false` once shutdown is complete
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-14T01:44:48
| 2024-02-14T01:48:42
| 2024-02-14T01:48:42
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2484",
"html_url": "https://github.com/ollama/ollama/pull/2484",
"diff_url": "https://github.com/ollama/ollama/pull/2484.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2484.patch",
"merged_at": "2024-02-14T01:48:42"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2484/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4371
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4371/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4371/comments
|
https://api.github.com/repos/ollama/ollama/issues/4371/events
|
https://github.com/ollama/ollama/issues/4371
| 2,291,252,516
|
I_kwDOJ0Z1Ps6Ikb0k
| 4,371
|
Feature request: ollama pull xxx (and run), add the feature of the limit download speed
|
{
"login": "edwin2jiang",
"id": 48372256,
"node_id": "MDQ6VXNlcjQ4MzcyMjU2",
"avatar_url": "https://avatars.githubusercontent.com/u/48372256?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edwin2jiang",
"html_url": "https://github.com/edwin2jiang",
"followers_url": "https://api.github.com/users/edwin2jiang/followers",
"following_url": "https://api.github.com/users/edwin2jiang/following{/other_user}",
"gists_url": "https://api.github.com/users/edwin2jiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edwin2jiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edwin2jiang/subscriptions",
"organizations_url": "https://api.github.com/users/edwin2jiang/orgs",
"repos_url": "https://api.github.com/users/edwin2jiang/repos",
"events_url": "https://api.github.com/users/edwin2jiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/edwin2jiang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-05-12T09:31:28
| 2024-05-23T20:08:25
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
## Why do we need this feature?
When I use `ollama run llama3`, the download occupies my full network connection, and while it runs I can't open any other website on my PC.
## What will it be like:
The command-line API could look like this:
```
ollama run llama3 --speed=500
```
This would limit the download speed to 500 KB/s.
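Until such a flag exists, one workaround sketch is to throttle the download externally with `trickle`, a userspace bandwidth shaper; note it works via `LD_PRELOAD` and only affects dynamically linked binaries, so whether it intercepts Ollama's downloads is an assumption here, not something confirmed in this issue:

```shell
# Limit downstream bandwidth to roughly 500 KB/s for this invocation only.
trickle -d 500 ollama pull llama3
```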
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4371/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/4371/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3034
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3034/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3034/comments
|
https://api.github.com/repos/ollama/ollama/issues/3034/events
|
https://github.com/ollama/ollama/pull/3034
| 2,177,524,795
|
PR_kwDOJ0Z1Ps5pKOV7
| 3,034
|
patch: use default locale in wpm tokenizer
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-10T04:49:54
| 2024-03-10T05:12:13
| 2024-03-10T05:12:12
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3034",
"html_url": "https://github.com/ollama/ollama/pull/3034",
"diff_url": "https://github.com/ollama/ollama/pull/3034.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3034.patch",
"merged_at": "2024-03-10T05:12:12"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3034/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8158
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8158/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8158/comments
|
https://api.github.com/repos/ollama/ollama/issues/8158/events
|
https://github.com/ollama/ollama/issues/8158
| 2,748,224,813
|
I_kwDOJ0Z1Ps6jzpUt
| 8,158
|
IBM Granite MoE & Dense-2b is very slow when KV Cache quantization is enabled
|
{
"login": "vYLQs6",
"id": 143073604,
"node_id": "U_kgDOCIchRA",
"avatar_url": "https://avatars.githubusercontent.com/u/143073604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vYLQs6",
"html_url": "https://github.com/vYLQs6",
"followers_url": "https://api.github.com/users/vYLQs6/followers",
"following_url": "https://api.github.com/users/vYLQs6/following{/other_user}",
"gists_url": "https://api.github.com/users/vYLQs6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vYLQs6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vYLQs6/subscriptions",
"organizations_url": "https://api.github.com/users/vYLQs6/orgs",
"repos_url": "https://api.github.com/users/vYLQs6/repos",
"events_url": "https://api.github.com/users/vYLQs6/events{/privacy}",
"received_events_url": "https://api.github.com/users/vYLQs6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 3
| 2024-12-18T16:11:21
| 2025-01-03T16:59:53
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I found that all Granite MoE models and dense:2b run extremely slowly when KV cache quantization is enabled. There doesn't seem to be any hit to the models' response quality, just speed, which is kind of strange.
I'm using Windows 11 + RTX 4090
Here is an example using model: granite3.1-moe:3b-instruct-q8_0
### `set OLLAMA_FLASH_ATTENTION=1 && set OLLAMA_KV_CACHE_TYPE=q8_0 && ollama serve`
```
>>> how far is the moon
The distance from Earth to the Moon can vary due to the elliptical shape of its orbit around our planet. On
average, it's about 238,855 miles (384,400 kilometers) away from Earth. However, this is approximately 238,855
miles (384,400 kilometers) at its closest approach and can range up to 252,088 miles (405,696 kilometers) during
its farthest point in its elliptical path.
total duration: 8.3218603s
load duration: 15.7633ms
prompt eval count: 49 token(s)
prompt eval duration: 242ms
prompt eval rate: 202.48 tokens/s
eval count: 130 token(s)
eval duration: 8.005s
eval rate: 16.24 tokens/s
```
### `ollama serve`
```
>>> how far is the moon
The average distance from Earth to the Moon is approximately 238,855 miles (384,400 kilometers). However, it's
important to note that this can fluctuate slightly due to the elliptical nature of its orbit around our planet. At
its closest point, known as perigee, it's about 225,623 miles (363,104 kilometers) away from Earth, while at its
farthest point, called apogee, it can reach up to 252,088 miles (405,696 kilometers).
total duration: 4.2702016s
load duration: 805.8374ms
prompt eval count: 193 token(s)
prompt eval duration: 287ms
prompt eval rate: 672.47 tokens/s
eval count: 142 token(s)
eval duration: 3.115s
eval rate: 45.59 tokens/s
```
---
granite3.1-dense:2b also has the same issue
`ollama run granite3.1-dense:2b-instruct-q8_0 --verbose`
### `set OLLAMA_FLASH_ATTENTION=1 && set OLLAMA_KV_CACHE_TYPE=q8_0 && ollama serve`
```
>>> how far is the moon
As previously mentioned, the average distance from the Earth to the Moon is approximately 238,855 miles (384,400
kilometers). This value remains constant throughout their orbital motion around each other.
total duration: 3.7165709s
load duration: 847.2622ms
prompt eval count: 124 token(s)
prompt eval duration: 93ms
prompt eval rate: 1333.33 tokens/s
eval count: 52 token(s)
eval duration: 2.717s
eval rate: 19.14 tokens/s
```
### `ollama serve`
```
>>> how far is the moon
The average distance from the Earth to the Moon is about 238,855 miles (384,400 kilometers). This distance is
often referred to as the semi-major axis of the Moon's elliptical orbit around the Earth.
total duration: 815.8894ms
load duration: 16.3291ms
prompt eval count: 49 token(s)
prompt eval duration: 286ms
prompt eval rate: 171.33 tokens/s
eval count: 61 token(s)
eval duration: 458ms
eval rate: 133.19 tokens/s
```
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8158/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8158/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4154
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4154/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4154/comments
|
https://api.github.com/repos/ollama/ollama/issues/4154/events
|
https://github.com/ollama/ollama/pull/4154
| 2,279,153,246
|
PR_kwDOJ0Z1Ps5ujLbX
| 4,154
|
Centralize server config handling
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-05-04T18:48:27
| 2024-05-06T00:08:30
| 2024-05-06T00:08:26
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4154",
"html_url": "https://github.com/ollama/ollama/pull/4154",
"diff_url": "https://github.com/ollama/ollama/pull/4154.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4154.patch",
"merged_at": "2024-05-06T00:08:26"
}
|
This moves all the env var reading into one central module and logs the loaded config once at startup, which should help when troubleshooting user server logs.
Example server output
```
% ollama serve 2>&1 | head -1
2024/05/05 14:56:27 routes.go:989: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4154/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4154/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2421
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2421/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2421/comments
|
https://api.github.com/repos/ollama/ollama/issues/2421/events
|
https://github.com/ollama/ollama/issues/2421
| 2,126,544,021
|
I_kwDOJ0Z1Ps5-wHyV
| 2,421
|
Offline models are not appearing on the Ollama server list
|
{
"login": "HariKunapareddy",
"id": 29164891,
"node_id": "MDQ6VXNlcjI5MTY0ODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/29164891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HariKunapareddy",
"html_url": "https://github.com/HariKunapareddy",
"followers_url": "https://api.github.com/users/HariKunapareddy/followers",
"following_url": "https://api.github.com/users/HariKunapareddy/following{/other_user}",
"gists_url": "https://api.github.com/users/HariKunapareddy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HariKunapareddy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HariKunapareddy/subscriptions",
"organizations_url": "https://api.github.com/users/HariKunapareddy/orgs",
"repos_url": "https://api.github.com/users/HariKunapareddy/repos",
"events_url": "https://api.github.com/users/HariKunapareddy/events{/privacy}",
"received_events_url": "https://api.github.com/users/HariKunapareddy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-02-09T06:19:27
| 2024-03-07T23:22:17
| 2024-03-07T23:22:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
**Problem**
I downloaded a model on a machine where Ollama is installed and which has internet access, then moved the model files from /usr/share/ollama/.ollama/models to a new machine that has no internet access. Ollama does not detect those models, and they are not visible in the output of the `list` command.
**Expected**
`ollama list`
should list the transferred model files.
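For what it's worth, `ollama list` resolves models through the `manifests` directory as well as the `blobs` directory, so both must be transferred. A sketch of such a transfer (paths assume the default Linux install; `airgapped-host` is a placeholder):

```shell
# Copy both the blob store and the manifests; copying blobs alone
# leaves the models invisible to `ollama list` on the target machine.
rsync -a /usr/share/ollama/.ollama/models/blobs \
         /usr/share/ollama/.ollama/models/manifests \
         airgapped-host:/usr/share/ollama/.ollama/models/
# Then restart the Ollama server on the target so it re-reads the store.
```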
|
{
"login": "HariKunapareddy",
"id": 29164891,
"node_id": "MDQ6VXNlcjI5MTY0ODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/29164891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HariKunapareddy",
"html_url": "https://github.com/HariKunapareddy",
"followers_url": "https://api.github.com/users/HariKunapareddy/followers",
"following_url": "https://api.github.com/users/HariKunapareddy/following{/other_user}",
"gists_url": "https://api.github.com/users/HariKunapareddy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HariKunapareddy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HariKunapareddy/subscriptions",
"organizations_url": "https://api.github.com/users/HariKunapareddy/orgs",
"repos_url": "https://api.github.com/users/HariKunapareddy/repos",
"events_url": "https://api.github.com/users/HariKunapareddy/events{/privacy}",
"received_events_url": "https://api.github.com/users/HariKunapareddy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2421/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1686
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1686/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1686/comments
|
https://api.github.com/repos/ollama/ollama/issues/1686/events
|
https://github.com/ollama/ollama/issues/1686
| 2,054,721,496
|
I_kwDOJ0Z1Ps56eI_Y
| 1,686
|
OLLAMA_ORIGINS=chrome-extension://etc does not work
|
{
"login": "sublimator",
"id": 525211,
"node_id": "MDQ6VXNlcjUyNTIxMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/525211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sublimator",
"html_url": "https://github.com/sublimator",
"followers_url": "https://api.github.com/users/sublimator/followers",
"following_url": "https://api.github.com/users/sublimator/following{/other_user}",
"gists_url": "https://api.github.com/users/sublimator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sublimator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sublimator/subscriptions",
"organizations_url": "https://api.github.com/users/sublimator/orgs",
"repos_url": "https://api.github.com/users/sublimator/repos",
"events_url": "https://api.github.com/users/sublimator/events{/privacy}",
"received_events_url": "https://api.github.com/users/sublimator/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2023-12-23T08:43:32
| 2024-01-06T01:20:11
| 2024-01-06T01:20:11
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
(base) ➜ ~ OLLAMA_ORIGINS=chrome-extension://dofdpnoclkigpakdndmhigfojjecnfln ollama serve
2023/12/23 15:37:56 images.go:737: total blobs: 85
2023/12/23 15:37:56 images.go:744: total unused blobs removed: 0
panic: bad origin: origins must contain '*' or include http://,https://
```
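The panic message itself states the rule the origin failed: an origin must be `*` or use an `http://`/`https://` scheme. A hypothetical sketch of that check (not Ollama's actual Go code, just the logic the message implies) shows why a `chrome-extension://` origin is rejected at this version:

```python
# Hypothetical sketch of the origin validation implied by the panic:
# "origins must contain '*' or include http://,https://". Any origin that
# is not the wildcard and does not use an http(s) scheme is rejected,
# which is why chrome-extension:// origins fail here.
def origin_allowed(origin: str) -> bool:
    return origin == "*" or origin.startswith(("http://", "https://"))

print(origin_allowed("chrome-extension://dofdpnoclkigpakdndmhigfojjecnfln"))
print(origin_allowed("https://localhost:3000"))
```

Based on that rule, `OLLAMA_ORIGINS='*'` would pass validation, at the cost of allowing all origins.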
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1686/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1952
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1952/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1952/comments
|
https://api.github.com/repos/ollama/ollama/issues/1952/events
|
https://github.com/ollama/ollama/issues/1952
| 2,078,935,668
|
I_kwDOJ0Z1Ps576gp0
| 1,952
|
`CUDA out of memory` when using long prompts and context sizes
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 20
| 2024-01-12T14:14:45
| 2024-12-17T07:19:31
| 2024-05-10T23:23:27
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When using a large context window (via `num_ctx`) and providing a large prompt, Ollama may run out of memory.
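A minimal sketch of how a request raises the context window via the `num_ctx` option of Ollama's `/api/generate` endpoint (the model name and the value 8192 here are illustrative, not from the report). Lowering `num_ctx` is the corresponding way to shrink the KV-cache footprint when memory is tight:

```python
import json

# Build a /api/generate request body with an enlarged context window.
# "num_ctx" controls the context size; larger values grow the KV cache
# in VRAM, which is what can push the GPU out of memory.
payload = {
    "model": "llama2",
    "prompt": "Summarize the following document ...",
    "options": {"num_ctx": 8192},
}
body = json.dumps(payload)
# This body would be POSTed to http://localhost:11434/api/generate.
```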
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1952/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2631
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2631/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2631/comments
|
https://api.github.com/repos/ollama/ollama/issues/2631/events
|
https://github.com/ollama/ollama/issues/2631
| 2,146,510,435
|
I_kwDOJ0Z1Ps5_8SZj
| 2,631
|
OpenLlama on Intel graphics card?
|
{
"login": "tambetvali",
"id": 119508958,
"node_id": "U_kgDOBx-P3g",
"avatar_url": "https://avatars.githubusercontent.com/u/119508958?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tambetvali",
"html_url": "https://github.com/tambetvali",
"followers_url": "https://api.github.com/users/tambetvali/followers",
"following_url": "https://api.github.com/users/tambetvali/following{/other_user}",
"gists_url": "https://api.github.com/users/tambetvali/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tambetvali/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tambetvali/subscriptions",
"organizations_url": "https://api.github.com/users/tambetvali/orgs",
"repos_url": "https://api.github.com/users/tambetvali/repos",
"events_url": "https://api.github.com/users/tambetvali/events{/privacy}",
"received_events_url": "https://api.github.com/users/tambetvali/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-02-21T11:25:04
| 2024-07-24T22:35:09
| 2024-07-24T22:35:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello!
I'm using CodeLlama-7b on Ubuntu 22.01, Visual Studio Code 1.86 or Ollama command line tool, HP ProBook 440 G6 with Intel® Core™ i3-8145U CPU @ 2.10GHz × 4, 16 GB memory and Mesa Intel® UHD Graphics 620 (WHL GT2) graphics card, which they call also Intel Corporation WhiskeyLake-U GT2 [UHD Graphics 620].
I installed IPEX (Intel Extension for PyTorch) and, in the process, something like CUDA drivers from Intel's site. It's hard to get anything accelerated with this, though.
With this setup, CodeLlama from Visual Studio Code is extremely slow, and CodeLlama from the command line is also quite slow, even though I'm using only the 7b version.
The instructions for updating CodeLlama do not seem straightforward: I am asked to install numerous things from HuggingFace, but there seems to be no link between the HuggingFace database, the Ollama tool, and Visual Studio Code - none of them lets me use that HuggingFace training directly. I don't actually understand what is missing, or why I simply don't have an acceleration option from those other tools. It also feels more like building an application than installing one - so it won't update automatically, for example (unless perhaps I do this on Gentoo).
I have the following questions:
* Should the 7b model be slow or fast on my computer (right now I wait several minutes in VSC, and some time even for simple questions in Ollama), and is it possible to get the 13b or 70b models going on this machine?
* What do I have to do to get Intel acceleration in the VSC and Ollama applications/plugins themselves (not as a separate Python application), and to integrate them into my infrastructure with automatic updates, like all my other programs have? After that, should the Ollama application be fast or slow on my computer? I don't know what to expect from this machine, or whether searching for acceleration methods is worth the time. Maybe it should alert in red when processor use is unacceptable and it cannot coexist with other programs, telling me "CodeLlama is slowed down by 17% by Firefox and a PDF reader", or that I should replace VSC with another IDE to get better results. At this speed, I don't even expect to use real-time code completion.
Is there an Ollama speed benchmark somewhere for different generations of processors, memory amounts, graphics cards, drivers, and other relevant parameters and installation options? Why not have anonymous collection of those parameters, answer speeds with information on the complexity of each answer (if parameters vary per question), and resource load (whether other apps are actively using the processor, memory, etc.)? From this I could see how much my usage pattern, settings, computer type, OS, and the tool from which I use CodeLlama matter to its overall speed, and what to expect if I move into a higher rank of users who have, for example, installed HuggingFace Ollama or otherwise updated their acceleration (this way it would also be synchronized across different builds using the same model). I don't know what you should do about users who report false information; maybe there should also be trust networks :) I think a shared performance database is fairly critical for getting these applications adopted - for example, I don't really know whether the 70b model gets swapped from memory to disk instantly on my computer, or what happens if another program is actively using half of the memory. I would like to see those relations for my computer, and how other users have sped up the same model, training, and algorithm with different clients to the Ollama server - maybe also the option to run a standard test (questions plus answer checking) with an explanation of how it was sped up, or checkboxes for acceleration, moderation of computer use, etc.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2631/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2631/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7477
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7477/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7477/comments
|
https://api.github.com/repos/ollama/ollama/issues/7477/events
|
https://github.com/ollama/ollama/issues/7477
| 2,630,958,746
|
I_kwDOJ0Z1Ps6c0T6a
| 7,477
|
Submitting 4 images to an Ollama vision model generates a large amount of log output without any response
|
{
"login": "delubee",
"id": 36232221,
"node_id": "MDQ6VXNlcjM2MjMyMjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/36232221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/delubee",
"html_url": "https://github.com/delubee",
"followers_url": "https://api.github.com/users/delubee/followers",
"following_url": "https://api.github.com/users/delubee/following{/other_user}",
"gists_url": "https://api.github.com/users/delubee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/delubee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/delubee/subscriptions",
"organizations_url": "https://api.github.com/users/delubee/orgs",
"repos_url": "https://api.github.com/users/delubee/repos",
"events_url": "https://api.github.com/users/delubee/events{/privacy}",
"received_events_url": "https://api.github.com/users/delubee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-11-03T04:31:53
| 2024-11-06T16:57:49
| 2024-11-06T16:57:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Code (Python):
```python
# -*- coding: utf-8 -*-
from ollama import Client
import pymupdf as fitz
import os
import base64
client = Client(host='http://127.0.0.1:11434')
pdf_path = './books/book2.pdf'
doc = fitz.open(pdf_path)
image_base64_list = []
os.makedirs('./images', exist_ok=True)  # ensure the output directory for page images exists
for i in range(min(4, doc.page_count)):
page = doc.load_page(i)
pix = page.get_pixmap()
image_path = f'./images/page_{i + 1}.jpg'
pix.save(image_path)
with open(image_path, 'rb') as image_file:
encoded_string = base64.b64encode(image_file.read()).decode('utf-8')
image_base64_list.append(encoded_string)
doc.close()
response = client.chat(model='llava:7b', messages=[
{
'role': 'user',
'content': 'Extract information from the image including book title, author, publisher, publication date, ISBN, main content, etc. if available.',
'images': image_base64_list
},
])
print(response['message']['content'])
```
No response is ever returned; instead, a large amount of log output is generated.
logs:
```
2024/11/03 12:11:34 routes.go:1158: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:d:\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-11-03T12:11:34.700+08:00 level=INFO source=images.go:754 msg="total blobs: 101"
time=2024-11-03T12:11:34.714+08:00 level=INFO source=images.go:761 msg="total unused blobs removed: 0"
time=2024-11-03T12:11:34.720+08:00 level=INFO source=routes.go:1205 msg="Listening on [::]:11434 (version 0.3.14)"
time=2024-11-03T12:11:34.722+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v11 cuda_v12 rocm_v6.1 cpu cpu_avx cpu_avx2]"
time=2024-11-03T12:11:34.722+08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-03T12:11:34.722+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2
time=2024-11-03T12:11:34.722+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=14 efficiency=0 threads=28
time=2024-11-03T12:11:34.722+08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=14 efficiency=0 threads=28
time=2024-11-03T12:11:35.014+08:00 level=INFO source=gpu.go:326 msg="detected OS VRAM overhead" id=GPU-789bc630-2559-016c-8a5f-b30f23ffd42e library=cuda compute=6.1 driver=12.6 name="Tesla P40" overhead="146.4 MiB"
time=2024-11-03T12:11:35.317+08:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-789bc630-2559-016c-8a5f-b30f23ffd42e library=cuda variant=v12 compute=6.1 driver=12.6 name="Tesla P40" total="22.4 GiB" available="22.2 GiB"
time=2024-11-03T12:11:35.317+08:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-b478a8b5-91bf-57f6-450d-608b615acd97 library=cuda variant=v12 compute=6.1 driver=12.6 name="NVIDIA GeForce GTX 1080 Ti" total="11.0 GiB" available="10.0 GiB"
[GIN] 2024/11/03 - 12:11:35 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/03 - 12:11:35 | 200 | 26.2634ms | 127.0.0.1 | GET "/api/tags"
time=2024-11-03T12:11:40.590+08:00 level=WARN source=sched.go:137 msg="multimodal models don't support parallel requests yet"
time=2024-11-03T12:11:40.651+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=d:\models\blobs\sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 gpu=GPU-789bc630-2559-016c-8a5f-b30f23ffd42e parallel=1 available=23889838080 required="5.3 GiB"
time=2024-11-03T12:11:40.682+08:00 level=INFO source=server.go:105 msg="system memory" total="63.9 GiB" free="45.5 GiB" free_swap="46.0 GiB"
time=2024-11-03T12:11:40.684+08:00 level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[22.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="5.3 GiB" memory.required.kv="256.0 MiB" memory.required.allocations="[5.3 GiB]" memory.weights.total="3.9 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="181.0 MiB"
time=2024-11-03T12:11:40.706+08:00 level=INFO source=server.go:388 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\runners\\cuda_v12\\ollama_llama_server.exe --model d:\\models\\blobs\\sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 --ctx-size 2048 --batch-size 512 --embedding --n-gpu-layers 33 --mmproj d:\\models\\blobs\\sha256-72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539 --threads 14 --no-mmap --parallel 1 --port 8495"
time=2024-11-03T12:11:40.726+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-03T12:11:40.726+08:00 level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
time=2024-11-03T12:11:40.727+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
INFO [wmain] starting c++ runner | tid="27324" timestamp=1730607100
INFO [wmain] build info | build=3871 commit="63424972" tid="27324" timestamp=1730607100
INFO [wmain] system info | n_threads=14 n_threads_batch=14 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="27324" timestamp=1730607100 total_threads=56
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="55" port="8495" tid="27324" timestamp=1730607101
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: Tesla P40, compute capability 6.1, VMM: no
time=2024-11-03T12:11:41.233+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
key clip.vision.image_grid_pinpoints not found in file
key clip.vision.mm_patch_merge_type not found in file
key clip.vision.image_crop_resolution not found in file
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from d:\models\blobs\sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = liuhaotian
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 3
llm_load_vocab: token to piece cache size = 0.1637 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name = liuhaotian
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: EOG token = 2 '</s>'
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size = 0.27 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CUDA_Host buffer size = 70.31 MiB
llm_load_tensors: CUDA0 buffer size = 3847.55 MiB
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.14 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 164.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 12.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
clip_model_load: model name: openai/clip-vit-large-patch14-336
clip_model_load: description: image encoder for LLaVA
clip_model_load: GGUF version: 3
clip_model_load: alignment: 32
clip_model_load: n_tensors: 377
clip_model_load: n_kv: 19
clip_model_load: ftype: f16
clip_model_load: loaded meta data with 19 key-value pairs and 377 tensors from d:\models\blobs\sha256-72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539
clip_model_load: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
clip_model_load: - kv 0: general.architecture str = clip
clip_model_load: - kv 1: clip.has_text_encoder bool = false
clip_model_load: - kv 2: clip.has_vision_encoder bool = true
clip_model_load: - kv 3: clip.has_llava_projector bool = true
clip_model_load: - kv 4: general.file_type u32 = 1
clip_model_load: - kv 5: general.name str = openai/clip-vit-large-patch14-336
clip_model_load: - kv 6: general.description str = image encoder for LLaVA
clip_model_load: - kv 7: clip.projector_type str = mlp
clip_model_load: - kv 8: clip.vision.image_size u32 = 336
clip_model_load: - kv 9: clip.vision.patch_size u32 = 14
clip_model_load: - kv 10: clip.vision.embedding_length u32 = 1024
clip_model_load: - kv 11: clip.vision.feed_forward_length u32 = 4096
clip_model_load: - kv 12: clip.vision.projection_dim u32 = 768
clip_model_load: - kv 13: clip.vision.attention.head_count u32 = 16
clip_model_load: - kv 14: clip.vision.attention.layer_norm_epsilon f32 = 0.000010
clip_model_load: - kv 15: clip.vision.block_count u32 = 23
clip_model_load: - kv 16: clip.vision.image_mean arr[f32,3] = [0.481455, 0.457828, 0.408211]
clip_model_load: - kv 17: clip.vision.image_std arr[f32,3] = [0.268630, 0.261303, 0.275777]
clip_model_load: - kv 18: clip.use_gelu bool = false
clip_model_load: - type f32: 235 tensors
clip_model_load: - type f16: 142 tensors
clip_model_load: CLIP using CUDA backend
clip_model_load: text_encoder: 0
clip_model_load: vision_encoder: 1
clip_model_load: llava_projector: 1
clip_model_load: minicpmv_projector: 0
clip_model_load: model size: 595.49 MB
clip_model_load: metadata size: 0.13 MB
clip_model_load: params backend buffer size = 595.49 MB (377 tensors)
clip_model_load: compute allocated memory: 32.89 MB
INFO [wmain] model loaded | tid="27324" timestamp=1730607103
time=2024-11-03T12:11:43.537+08:00 level=INFO source=server.go:626 msg="llama runner started in 2.81 seconds"
encode_image_with_clip: image embedding created: 576 tokens
encode_image_with_clip: image encoded in 125.75 ms by CLIP ( 0.22 ms per image patch)
encode_image_with_clip: image embedding created: 576 tokens
encode_image_with_clip: image encoded in 116.25 ms by CLIP ( 0.20 ms per image patch)
encode_image_with_clip: image embedding created: 576 tokens
encode_image_with_clip: image encoded in 112.17 ms by CLIP ( 0.19 ms per image patch)
encode_image_with_clip: image embedding created: 576 tokens
encode_image_with_clip: image encoded in 109.71 ms by CLIP ( 0.19 ms per image patch)
ERROR [update_slots] failed processing images | slot_id=0 task_id=2 tid="27324" timestamp=1730607106
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
```
The following two lines then repeat indefinitely:
```
llama_decode_internal: invalid token[0] = -1462573600
llama_decode: failed to decode, ret = -1
```
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.14
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7477/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2046
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2046/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2046/comments
|
https://api.github.com/repos/ollama/ollama/issues/2046/events
|
https://github.com/ollama/ollama/issues/2046
| 2,087,802,806
|
I_kwDOJ0Z1Ps58cVe2
| 2,046
|
What quantization is used to quantize Phi-2?
|
{
"login": "bm777",
"id": 29865600,
"node_id": "MDQ6VXNlcjI5ODY1NjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/29865600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bm777",
"html_url": "https://github.com/bm777",
"followers_url": "https://api.github.com/users/bm777/followers",
"following_url": "https://api.github.com/users/bm777/following{/other_user}",
"gists_url": "https://api.github.com/users/bm777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bm777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bm777/subscriptions",
"organizations_url": "https://api.github.com/users/bm777/orgs",
"repos_url": "https://api.github.com/users/bm777/repos",
"events_url": "https://api.github.com/users/bm777/events{/privacy}",
"received_events_url": "https://api.github.com/users/bm777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-18T08:49:39
| 2024-01-19T04:05:23
| 2024-01-19T04:05:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Running Phi-2 with Ollama is faster than running Phi-2 in Rust with Candle. Candle uses 1.7 GB of my memory while Ollama uses only 788 MB. I guess it is using the same 1.6 GB quantized GGUF.
Is Ollama:
- quantizing it at run time,
- quantizing it beforehand,
- using llama.cpp under the hood, or
- not quantizing at all?
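One way to answer this empirically is to ask a running Ollama server what it is serving. A minimal sketch, assuming Ollama is running on its default port, the `phi` model has been pulled, and the request field name follows the current `/api/show` documentation; the response's `details` object reports the quantization level of the pre-quantized GGUF weights:

```python
import json
import urllib.request

# Hedged sketch: query a local Ollama server's /api/show endpoint. Ollama serves
# pre-quantized GGUF weights (via llama.cpp); no quantization happens at run time.
def show_payload(model: str) -> bytes:
    """Build the JSON request body for /api/show."""
    return json.dumps({"model": model}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/show",
    data=show_payload("phi"),
    headers={"Content-Type": "application/json"},
)
# with urllib.request.urlopen(req) as resp:           # uncomment with a running server
#     details = json.load(resp).get("details", {})    # e.g. {"quantization_level": "Q4_0", ...}
```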
----
|
{
"login": "bm777",
"id": 29865600,
"node_id": "MDQ6VXNlcjI5ODY1NjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/29865600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bm777",
"html_url": "https://github.com/bm777",
"followers_url": "https://api.github.com/users/bm777/followers",
"following_url": "https://api.github.com/users/bm777/following{/other_user}",
"gists_url": "https://api.github.com/users/bm777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bm777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bm777/subscriptions",
"organizations_url": "https://api.github.com/users/bm777/orgs",
"repos_url": "https://api.github.com/users/bm777/repos",
"events_url": "https://api.github.com/users/bm777/events{/privacy}",
"received_events_url": "https://api.github.com/users/bm777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2046/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8337
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8337/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8337/comments
|
https://api.github.com/repos/ollama/ollama/issues/8337/events
|
https://github.com/ollama/ollama/issues/8337
| 2,773,119,533
|
I_kwDOJ0Z1Ps6lSnIt
| 8,337
|
Cannot get a tool call and a message in the same response
|
{
"login": "gotyer",
"id": 88534676,
"node_id": "MDQ6VXNlcjg4NTM0Njc2",
"avatar_url": "https://avatars.githubusercontent.com/u/88534676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gotyer",
"html_url": "https://github.com/gotyer",
"followers_url": "https://api.github.com/users/gotyer/followers",
"following_url": "https://api.github.com/users/gotyer/following{/other_user}",
"gists_url": "https://api.github.com/users/gotyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gotyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gotyer/subscriptions",
"organizations_url": "https://api.github.com/users/gotyer/orgs",
"repos_url": "https://api.github.com/users/gotyer/repos",
"events_url": "https://api.github.com/users/gotyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/gotyer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2025-01-07T15:28:22
| 2025-01-27T15:34:46
| 2025-01-08T07:30:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I use the api/chat endpoint, and I can make an agent either respond normally, or call a tool. It can do either successfully, but never both in the same call.
Whenever a tool is called, the content is empty.
Even if I explicitly prompt the model to write a long text before using the tool, the content field of the response is still empty.
Here's an example of an agent calling a tool (the tool is the act of querying another agent).
```
[{
"created_at": "2025-01-07T15:07:29.9089713Z",
"done": false,
"message": {
"content": "", ---CONTENT IS ALWAYS EMPTY---
"role": "assistant",
"tool_calls": [{
"function": {
"arguments": {
"query": "Add a new contact named Charlie with email charlie@example.com, role CFO, company Z"
},
"name": "query_contact_manager"
}
}
]
},
"model": "llama3.1"
}, {
"created_at": "2025-01-07T15:07:29.91725Z",
"done": true,
"done_reason": "stop",
"eval_count": 35,
"eval_duration": 305000000,
"load_duration": 2245466100,
"message": {
"content": "",
"role": "assistant"
},
"model": "llama3.1",
"prompt_eval_count": 478,
"prompt_eval_duration": 181000000,
"total_duration": 3106890500
}
]
```
For more context, here is the API body (no unusual options are enabled):
```
data = {
"model": self.model,
"messages": messages,
"tools": ollama_tools,
"options": {
"num_ctx": self.num_ctx,
"temperature":0.1,
}
}
```
Am I misunderstanding something, or is there a mistake on the ollama server?
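For reference, the usual tool-calling flow is two round trips: the first `/api/chat` response carries `tool_calls` with empty `content`, the client executes the tool and appends its result as a `role="tool"` message, and a second `/api/chat` call then produces the prose reply. A minimal sketch of the client side, where the contact-manager tool and its output are hypothetical:

```python
# Hedged sketch of the two-step tool-call loop against Ollama's /api/chat.
# Step 1: the model replies with tool_calls and empty content (as in the log above).
# Step 2: the client runs the tool, appends the result as a role="tool" message,
# and posts the full history again to get a normal text reply.
def append_tool_result(messages, assistant_msg, tool_output):
    messages.append(assistant_msg)                          # echo the tool-call turn
    messages.append({"role": "tool", "content": tool_output})
    return messages

messages = [{"role": "user", "content": "Add contact Charlie"}]
assistant_msg = {
    "role": "assistant",
    "content": "",
    "tool_calls": [{"function": {"name": "query_contact_manager",
                                 "arguments": {"query": "Add Charlie"}}}],
}
messages = append_tool_result(messages, assistant_msg, "Contact Charlie added.")
# POST {"model": "llama3.1", "messages": messages} to /api/chat for the follow-up text.
```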
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.1
|
{
"login": "gotyer",
"id": 88534676,
"node_id": "MDQ6VXNlcjg4NTM0Njc2",
"avatar_url": "https://avatars.githubusercontent.com/u/88534676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gotyer",
"html_url": "https://github.com/gotyer",
"followers_url": "https://api.github.com/users/gotyer/followers",
"following_url": "https://api.github.com/users/gotyer/following{/other_user}",
"gists_url": "https://api.github.com/users/gotyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gotyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gotyer/subscriptions",
"organizations_url": "https://api.github.com/users/gotyer/orgs",
"repos_url": "https://api.github.com/users/gotyer/repos",
"events_url": "https://api.github.com/users/gotyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/gotyer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8337/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4969
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4969/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4969/comments
|
https://api.github.com/repos/ollama/ollama/issues/4969/events
|
https://github.com/ollama/ollama/issues/4969
| 2,345,097,011
|
I_kwDOJ0Z1Ps6Lx1cz
| 4,969
|
Error pulling Qwen2 models: unknown pre-tokenizer type: 'qwen2'
|
{
"login": "agilebean",
"id": 28497542,
"node_id": "MDQ6VXNlcjI4NDk3NTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/28497542?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agilebean",
"html_url": "https://github.com/agilebean",
"followers_url": "https://api.github.com/users/agilebean/followers",
"following_url": "https://api.github.com/users/agilebean/following{/other_user}",
"gists_url": "https://api.github.com/users/agilebean/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agilebean/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agilebean/subscriptions",
"organizations_url": "https://api.github.com/users/agilebean/orgs",
"repos_url": "https://api.github.com/users/agilebean/repos",
"events_url": "https://api.github.com/users/agilebean/events{/privacy}",
"received_events_url": "https://api.github.com/users/agilebean/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-06-11T00:44:31
| 2024-06-18T20:30:38
| 2024-06-18T20:30:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After downloading any Qwen2 model, e.g.
```
ollama run qwen2:1.5b
```
the following error is shown:
```
Error: llama runner process has terminated: signal: abort trap error:error loading model vocabulary: unknown pre-tokenizer type: 'qwen2'
```
tested on Ollama versions `0.1.38` and `0.1.42`
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.38
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4969/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8119
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8119/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8119/comments
|
https://api.github.com/repos/ollama/ollama/issues/8119/events
|
https://github.com/ollama/ollama/pull/8119
| 2,742,652,981
|
PR_kwDOJ0Z1Ps6FXwC6
| 8,119
|
docs: change getting started guide link for pgai
|
{
"login": "Askir",
"id": 10251955,
"node_id": "MDQ6VXNlcjEwMjUxOTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10251955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Askir",
"html_url": "https://github.com/Askir",
"followers_url": "https://api.github.com/users/Askir/followers",
"following_url": "https://api.github.com/users/Askir/following{/other_user}",
"gists_url": "https://api.github.com/users/Askir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Askir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Askir/subscriptions",
"organizations_url": "https://api.github.com/users/Askir/orgs",
"repos_url": "https://api.github.com/users/Askir/repos",
"events_url": "https://api.github.com/users/Askir/events{/privacy}",
"received_events_url": "https://api.github.com/users/Askir/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-12-16T15:12:29
| 2024-12-17T06:13:23
| 2024-12-17T06:13:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8119",
"html_url": "https://github.com/ollama/ollama/pull/8119",
"diff_url": "https://github.com/ollama/ollama/pull/8119.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8119.patch",
"merged_at": "2024-12-17T06:13:23"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8119/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4176
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4176/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4176/comments
|
https://api.github.com/repos/ollama/ollama/issues/4176/events
|
https://github.com/ollama/ollama/issues/4176
| 2,279,700,328
|
I_kwDOJ0Z1Ps6H4Xdo
| 4,176
|
models aren't saving on linux
|
{
"login": "asmrtfm",
"id": 154548075,
"node_id": "U_kgDOCTY3aw",
"avatar_url": "https://avatars.githubusercontent.com/u/154548075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asmrtfm",
"html_url": "https://github.com/asmrtfm",
"followers_url": "https://api.github.com/users/asmrtfm/followers",
"following_url": "https://api.github.com/users/asmrtfm/following{/other_user}",
"gists_url": "https://api.github.com/users/asmrtfm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asmrtfm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asmrtfm/subscriptions",
"organizations_url": "https://api.github.com/users/asmrtfm/orgs",
"repos_url": "https://api.github.com/users/asmrtfm/repos",
"events_url": "https://api.github.com/users/asmrtfm/events{/privacy}",
"received_events_url": "https://api.github.com/users/asmrtfm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-05-05T19:49:52
| 2024-05-05T19:57:52
| 2024-05-05T19:57:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "asmrtfm",
"id": 154548075,
"node_id": "U_kgDOCTY3aw",
"avatar_url": "https://avatars.githubusercontent.com/u/154548075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asmrtfm",
"html_url": "https://github.com/asmrtfm",
"followers_url": "https://api.github.com/users/asmrtfm/followers",
"following_url": "https://api.github.com/users/asmrtfm/following{/other_user}",
"gists_url": "https://api.github.com/users/asmrtfm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asmrtfm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asmrtfm/subscriptions",
"organizations_url": "https://api.github.com/users/asmrtfm/orgs",
"repos_url": "https://api.github.com/users/asmrtfm/repos",
"events_url": "https://api.github.com/users/asmrtfm/events{/privacy}",
"received_events_url": "https://api.github.com/users/asmrtfm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4176/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6772
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6772/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6772/comments
|
https://api.github.com/repos/ollama/ollama/issues/6772/events
|
https://github.com/ollama/ollama/issues/6772
| 2,521,652,968
|
I_kwDOJ0Z1Ps6WTV7o
| 6,772
|
Using the qwen2-7B-q8 model, the GPU can reach 14-15GB when accessed with the Dify API
|
{
"login": "bingbing6",
"id": 51957370,
"node_id": "MDQ6VXNlcjUxOTU3Mzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/51957370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bingbing6",
"html_url": "https://github.com/bingbing6",
"followers_url": "https://api.github.com/users/bingbing6/followers",
"following_url": "https://api.github.com/users/bingbing6/following{/other_user}",
"gists_url": "https://api.github.com/users/bingbing6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bingbing6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bingbing6/subscriptions",
"organizations_url": "https://api.github.com/users/bingbing6/orgs",
"repos_url": "https://api.github.com/users/bingbing6/repos",
"events_url": "https://api.github.com/users/bingbing6/events{/privacy}",
"received_events_url": "https://api.github.com/users/bingbing6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-09-12T08:02:25
| 2024-11-06T00:28:39
| 2024-11-06T00:28:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When using the qwen2-7B-q8 model through Dify's API, GPU usage reaches up to 15GB, but calling Ollama's API directly uses the normal 9GB. Dify says that they call Ollama's API normally, without doing any other processing. See my question to them for details:
https://github.com/langgenius/dify/issues/8294
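One common reason two clients see different GPU usage for the same model is the context window: the KV cache grows linearly with `num_ctx`, and a front end may request a much larger context than a direct API call. A rough back-of-the-envelope estimate, where the Qwen2-7B figures (28 layers, 4 KV heads via GQA, head dimension 128) are assumptions taken from the public model config, and weights plus compute buffers add more on top:

```python
# Hedged estimate of the f16 KV-cache size for a given context length.
def kv_cache_bytes(n_layers, n_ctx, n_kv_heads, head_dim, bytes_per_elt=2):
    # 2x for the separate K and V tensors; f16 = 2 bytes per element.
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elt

small = kv_cache_bytes(28, 2048, 4, 128)   # ~112 MiB at a 2k context
large = kv_cache_bytes(28, 32768, 4, 128)  # ~1.75 GiB at a 32k context
```

Comparing the `num_ctx` each client sends (visible in the Ollama server log when the model loads) is a good first check.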
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.3.10
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6772/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/354
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/354/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/354/comments
|
https://api.github.com/repos/ollama/ollama/issues/354/events
|
https://github.com/ollama/ollama/pull/354
| 1,851,895,162
|
PR_kwDOJ0Z1Ps5X_lbE
| 354
|
retry download on network errors
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-15T18:09:21
| 2023-08-17T14:31:46
| 2023-08-17T14:31:45
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/354",
"html_url": "https://github.com/ollama/ollama/pull/354",
"diff_url": "https://github.com/ollama/ollama/pull/354.diff",
"patch_url": "https://github.com/ollama/ollama/pull/354.patch",
"merged_at": "2023-08-17T14:31:45"
}
|
Add a retry mechanism so downloads are retried on network errors.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/354/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1967
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1967/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1967/comments
|
https://api.github.com/repos/ollama/ollama/issues/1967/events
|
https://github.com/ollama/ollama/issues/1967
| 2,079,730,533
|
I_kwDOJ0Z1Ps579itl
| 1,967
|
Small context size limit occasionally causes Ollama to hang on prediction
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-01-12T21:13:23
| 2024-05-10T01:03:32
| 2024-05-10T01:03:31
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
While testing #1963 I noticed that occasionally a small context window causes the response to hang (maybe indefinitely?).
Here is an example:
```
curl http://localhost:11434/api/chat -d '{
"model": "llama2",
"messages": [
{
"role": "user",
"content": "hi"
},
{
"role": "assistant",
"content": "Hi there! How can I help you today?"
},
{
"role": "user",
"content": "what is your name?"
},
{
"role": "assistant",
"content": "My name is Bard."
},
{
"role": "user",
"content": "my name is joe"
}
],
"options": {
"num_ctx": 30
}
}'
{"model":"llama2","created_at":"2024-01-12T20:40:01.125385Z","message":{"role":"assistant","content":"Hello"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.139753Z","message":{"role":"assistant","content":" Joe"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.168206Z","message":{"role":"assistant","content":"!"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.18237Z","message":{"role":"assistant","content":" *"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.196786Z","message":{"role":"assistant","content":"g"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.210965Z","message":{"role":"assistant","content":"igg"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.225267Z","message":{"role":"assistant","content":"les"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.23965Z","message":{"role":"assistant","content":"*"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.253971Z","message":{"role":"assistant","content":" It"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.268295Z","message":{"role":"assistant","content":"'"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.282805Z","message":{"role":"assistant","content":"s"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.296887Z","message":{"role":"assistant","content":" nice"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.311215Z","message":{"role":"assistant","content":" to"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.325513Z","message":{"role":"assistant","content":" meet"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.339753Z","message":{"role":"assistant","content":" you"},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.353986Z","message":{"role":"assistant","content":"."},"done":false}
{"model":"llama2","created_at":"2024-01-12T20:40:01.367968Z","message":{"role":"assistant","content":" "},"done":false}
```
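Until the server-side hang is fixed, a client-side guard keeps a stalled stream from blocking the caller. A minimal sketch, where `read_with_deadline` is a hypothetical helper that stops consuming the streamed response lines after an overall deadline; pair it with an HTTP read timeout, since this check only runs between lines:

```python
import time

# Hedged client-side guard (not a fix for the hang itself): consume a streaming
# /api/chat response line-by-line, but stop once an overall deadline has passed.
def read_with_deadline(lines, deadline_s, clock=time.monotonic):
    start = clock()
    out = []
    for line in lines:
        if clock() - start > deadline_s:
            break
        out.append(line)
    return out

chunks = read_with_deadline(iter(['{"done":false}', '{"done":true}']), deadline_s=5.0)
```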
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1967/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3392
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3392/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3392/comments
|
https://api.github.com/repos/ollama/ollama/issues/3392/events
|
https://github.com/ollama/ollama/pull/3392
| 2,213,936,677
|
PR_kwDOJ0Z1Ps5rF5hO
| 3,392
|
CI windows gpu builds
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-28T19:08:23
| 2024-03-28T23:03:55
| 2024-03-28T23:03:52
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3392",
"html_url": "https://github.com/ollama/ollama/pull/3392",
"diff_url": "https://github.com/ollama/ollama/pull/3392.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3392.patch",
"merged_at": "2024-03-28T23:03:52"
}
|
Changes in the llm dir (e.g. bumping llama.cpp) can result in regressions on Windows. This will help us catch build failures in CI.
This also moves to a hand-crafted CUDA install for Windows instead of the action we were using, which turned out to be buggy.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3392/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3232
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3232/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3232/comments
|
https://api.github.com/repos/ollama/ollama/issues/3232/events
|
https://github.com/ollama/ollama/issues/3232
| 2,193,822,398
|
I_kwDOJ0Z1Ps6CwxK-
| 3,232
|
CUDA error: out of memory when use gemma model
|
{
"login": "ycyy",
"id": 10897377,
"node_id": "MDQ6VXNlcjEwODk3Mzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/10897377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ycyy",
"html_url": "https://github.com/ycyy",
"followers_url": "https://api.github.com/users/ycyy/followers",
"following_url": "https://api.github.com/users/ycyy/following{/other_user}",
"gists_url": "https://api.github.com/users/ycyy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ycyy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ycyy/subscriptions",
"organizations_url": "https://api.github.com/users/ycyy/orgs",
"repos_url": "https://api.github.com/users/ycyy/repos",
"events_url": "https://api.github.com/users/ycyy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ycyy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-03-19T01:36:18
| 2024-04-18T05:34:01
| 2024-04-17T22:52:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
import ollama

messages = []
system_message = {
    'role': 'system',
    'content': 'XXXX'
}
user_message = {
    'role': 'user',
    'content': 'XXXX'
}
messages.append(system_message)
messages.append(user_message)
stream = ollama.chat(
    model=model_name,  # model_name is set elsewhere in the script
    messages=messages,
    stream=True
)
```
When I use gemma to do this work, it fails with `CUDA error: out of memory`. The log is here:
```
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3070 Laptop GPU, compute capability 8.6, VMM: yes
llama_model_loader: loaded meta data with 24 key-value pairs and 254 tensors from D:\Ollama\models\blobs\sha256-456402914e838a953e0cf80caa6adbe75383d9e63584a964f504a7bbb8f7aad9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = gemma
llama_model_loader: - kv 1: general.name str = gemma-7b-it
llama_model_loader: - kv 2: gemma.context_length u32 = 8192
llama_model_loader: - kv 3: gemma.embedding_length u32 = 3072
llama_model_loader: - kv 4: gemma.block_count u32 = 28
llama_model_loader: - kv 5: gemma.feed_forward_length u32 = 24576
llama_model_loader: - kv 6: gemma.attention.head_count u32 = 16
llama_model_loader: - kv 7: gemma.attention.head_count_kv u32 = 16
llama_model_loader: - kv 8: gemma.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: gemma.attention.key_length u32 = 256
llama_model_loader: - kv 10: gemma.attention.value_length u32 = 256
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,256000] = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,256000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 2
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 18: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - kv 23: general.file_type u32 = 2
llama_model_loader: - type f32: 57 tensors
llama_model_loader: - type q4_0: 196 tensors
llama_model_loader: - type q8_0: 1 tensors
llm_load_vocab: mismatch in special tokens definition ( 416/256000 vs 260/256000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = gemma
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 256000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 3072
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 16
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_rot = 192
llm_load_print_meta: n_embd_head_k = 256
llm_load_print_meta: n_embd_head_v = 256
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 24576
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.54 B
llm_load_print_meta: model size = 4.84 GiB (4.87 BPW)
llm_load_print_meta: general.name = gemma-7b-it
llm_load_print_meta: BOS token = 2 '<bos>'
llm_load_print_meta: EOS token = 1 '<eos>'
llm_load_print_meta: UNK token = 3 '<unk>'
llm_load_print_meta: PAD token = 0 '<pad>'
llm_load_print_meta: LF token = 227 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.19 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors: CPU buffer size = 796.88 MiB
llm_load_tensors: CUDA0 buffer size = 4955.54 MiB
...........................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 896.00 MiB
llama_new_context_with_model: KV self size = 896.00 MiB, K (f16): 448.00 MiB, V (f16): 448.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 11.02 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 506.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 6.00 MiB
llama_new_context_with_model: graph splits (measure): 2
{"function":"initialize","level":"INFO","line":440,"msg":"initializing slots","n_slots":1,"tid":"20160","timestamp":1710811450}
{"function":"initialize","level":"INFO","line":452,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"20160","timestamp":1710811450}
time=2024-03-19T09:24:10.303+08:00 level=INFO source=dyn_ext_server.go:162 msg="Starting llama main loop"
{"function":"update_slots","level":"INFO","line":1590,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"29364","timestamp":1710811450}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"29364","timestamp":1710811450}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1828,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":63,"slot_id":0,"task_id":0,"tid":"29364","timestamp":1710811450}
{"function":"update_slots","level":"INFO","line":1852,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"29364","timestamp":1710811450}
CUDA error: out of memory
current device: 0, in function ggml_cuda_pool_malloc_vmm at C:\Users\jeff\git\ollama\llm\llama.cpp\ggml-cuda.cu:8658
cuMemSetAccess(g_cuda_pool_addr[device] + g_cuda_pool_size[device], reserve_size, &access, 1)
GGML_ASSERT: C:\Users\jeff\git\ollama\llm\llama.cpp\ggml-cuda.cu:256: !"CUDA error"
```
Windows 11 and RTX 3070 Laptop GPU. Does anyone know if it's a model error or something else?
### What did you expect to see?
```
messages = []
user_message = {
    'role': 'user',
    'content': 'XXXX'
}
messages.append(user_message)
stream = ollama.chat(
    model=model_name,
    messages=messages,
    stream=True
)
```
If I use it just like this, it's OK. However, other models (dolphin-mistral:latest, qwen:7b) don't have this problem.
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
x86
### Platform
_No response_
### Ollama version
0.1.29
### GPU
Nvidia
### GPU info
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 527.99 Driver Version: 527.99 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... WDDM | 00000000:01:00.0 On | N/A |
| N/A 45C P0 32W / 115W | 701MiB / 8192MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 5916 C+G ...y\ShellExperienceHost.exe N/A |
| 0 N/A N/A 7564 C+G C:\Windows\explorer.exe N/A |
| 0 N/A N/A 8684 C+G ...n1h2txyewy\SearchHost.exe N/A |
| 0 N/A N/A 8708 C+G ...artMenuExperienceHost.exe N/A |
| 0 N/A N/A 9864 C+G ...8bbwe\Notepad\Notepad.exe N/A |
| 0 N/A N/A 10100 C+G ...2txyewy\TextInputHost.exe N/A |
| 0 N/A N/A 11228 C+G ...cw5n1h2txyewy\LockApp.exe N/A |
| 0 N/A N/A 16052 C+G ...cal\Obsidian\Obsidian.exe N/A |
| 0 N/A N/A 19716 C+G ...2gh52qy24etm\Nahimic3.exe N/A |
| 0 N/A N/A 21696 C+G ...l-0.15.0\WeaselServer.exe N/A |
| 0 N/A N/A 22360 C+G ...8bbwe\WindowsTerminal.exe N/A |
| 0 N/A N/A 22876 C+G ...lPanel\SystemSettings.exe N/A |
| 0 N/A N/A 23712 C+G ...d\runtime\WeChatAppEx.exe N/A |
| 0 N/A N/A 26532 C+G E:\VSCode\Code.exe N/A |
| 0 N/A N/A 28660 C+G ...me\Application\chrome.exe N/A |
| 0 N/A N/A 30704 C+G ...ge\Application\msedge.exe N/A |
+-----------------------------------------------------------------------------+
```
### CPU
Intel
### Other software
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3232/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2817
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2817/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2817/comments
|
https://api.github.com/repos/ollama/ollama/issues/2817/events
|
https://github.com/ollama/ollama/issues/2817
| 2,159,842,467
|
I_kwDOJ0Z1Ps6AvJSj
| 2,817
|
Model request: StarCoder2
|
{
"login": "kylemclaren",
"id": 3727384,
"node_id": "MDQ6VXNlcjM3MjczODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3727384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylemclaren",
"html_url": "https://github.com/kylemclaren",
"followers_url": "https://api.github.com/users/kylemclaren/followers",
"following_url": "https://api.github.com/users/kylemclaren/following{/other_user}",
"gists_url": "https://api.github.com/users/kylemclaren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylemclaren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylemclaren/subscriptions",
"organizations_url": "https://api.github.com/users/kylemclaren/orgs",
"repos_url": "https://api.github.com/users/kylemclaren/repos",
"events_url": "https://api.github.com/users/kylemclaren/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylemclaren/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 10
| 2024-02-28T20:47:35
| 2024-06-06T17:42:00
| 2024-06-06T17:42:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please add the StarCoder2 model family: https://huggingface.co/collections/bigcode/starcoder2-65de6da6e87db3383572be1a
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2817/reactions",
"total_count": 32,
"+1": 26,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 6
}
|
https://api.github.com/repos/ollama/ollama/issues/2817/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4453
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4453/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4453/comments
|
https://api.github.com/repos/ollama/ollama/issues/4453/events
|
https://github.com/ollama/ollama/issues/4453
| 2,298,172,177
|
I_kwDOJ0Z1Ps6I-1MR
| 4,453
|
Ollama + sentence-transformers with torch cuda
|
{
"login": "qsdhj",
"id": 166700412,
"node_id": "U_kgDOCe-lfA",
"avatar_url": "https://avatars.githubusercontent.com/u/166700412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qsdhj",
"html_url": "https://github.com/qsdhj",
"followers_url": "https://api.github.com/users/qsdhj/followers",
"following_url": "https://api.github.com/users/qsdhj/following{/other_user}",
"gists_url": "https://api.github.com/users/qsdhj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qsdhj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qsdhj/subscriptions",
"organizations_url": "https://api.github.com/users/qsdhj/orgs",
"repos_url": "https://api.github.com/users/qsdhj/repos",
"events_url": "https://api.github.com/users/qsdhj/events{/privacy}",
"received_events_url": "https://api.github.com/users/qsdhj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 0
| 2024-05-15T14:59:37
| 2024-11-06T17:33:04
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
I use ollama together with the `intfloat/multilingual-e5-base` sentence-transformer in langchain and llamaIndex in Python.
If I use the torch version without CUDA, everything works as expected, except that my embeddings are created slowly.
With the torch CUDA version installed this way:

As soon as I load the sentence-transformer in my Python script, the weird behaviour starts.
The first prompt to a model in ollama works normally (takes a few seconds). From the second prompt onwards my GPU is at 100% load for a few minutes, then I get the response from the LLM.
This happens with the llamaindex / langchain API in Python and with the CLI.
If I terminate my Python script and restart ollama, it works normally again.
I use a Laptop with Windows 11
11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz 2.50 GHz,
32GB Ram
NVIDIA RTX A3000 6GB
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.37
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4453/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8313
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8313/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8313/comments
|
https://api.github.com/repos/ollama/ollama/issues/8313/events
|
https://github.com/ollama/ollama/issues/8313
| 2,769,512,309
|
I_kwDOJ0Z1Ps6lE2d1
| 8,313
|
Tool example returns error 'TypeError: Object of type function is not JSON serializable'
|
{
"login": "zamzx",
"id": 34112226,
"node_id": "MDQ6VXNlcjM0MTEyMjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/34112226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zamzx",
"html_url": "https://github.com/zamzx",
"followers_url": "https://api.github.com/users/zamzx/followers",
"following_url": "https://api.github.com/users/zamzx/following{/other_user}",
"gists_url": "https://api.github.com/users/zamzx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zamzx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zamzx/subscriptions",
"organizations_url": "https://api.github.com/users/zamzx/orgs",
"repos_url": "https://api.github.com/users/zamzx/repos",
"events_url": "https://api.github.com/users/zamzx/events{/privacy}",
"received_events_url": "https://api.github.com/users/zamzx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2025-01-05T23:54:17
| 2025-01-16T00:03:33
| 2025-01-16T00:03:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Tool calling results in the following error with ollama 0.5.4
"TypeError: Object of type function is not JSON serializable"
I am using this exact example: https://github.com/ollama/ollama-python/blob/main/examples/tools.py
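The error itself is easy to reproduce outside of ollama: Python's built-in `json` module cannot serialize function objects, so any code path that ends up placing a raw function into a JSON payload fails with exactly this `TypeError`. A minimal stdlib sketch (not ollama code; `add_two_numbers` is just an illustrative stand-in for the tool function in the example):

```python
import json

# A bare Python function object is not JSON-serializable;
# json.dumps raises the same TypeError reported above.
def add_two_numbers(a: int, b: int) -> int:
    return a + b

try:
    json.dumps({"tools": [add_two_numbers]})
except TypeError as e:
    print(e)  # Object of type function is not JSON serializable
```

This suggests the function reference is reaching the request serializer directly, rather than being converted to a tool schema first.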
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8313/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3127
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3127/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3127/comments
|
https://api.github.com/repos/ollama/ollama/issues/3127/events
|
https://github.com/ollama/ollama/pull/3127
| 2,184,928,293
|
PR_kwDOJ0Z1Ps5pjjeI
| 3,127
|
add `OLLAMA_KEEP_ALIVE` to environment variable docs for `ollama serve`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-13T21:33:57
| 2024-03-13T21:35:34
| 2024-03-13T21:35:33
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3127",
"html_url": "https://github.com/ollama/ollama/pull/3127",
"diff_url": "https://github.com/ollama/ollama/pull/3127.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3127.patch",
"merged_at": "2024-03-13T21:35:33"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3127/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1252
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1252/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1252/comments
|
https://api.github.com/repos/ollama/ollama/issues/1252/events
|
https://github.com/ollama/ollama/pull/1252
| 2,007,519,977
|
PR_kwDOJ0Z1Ps5gMXZg
| 1,252
|
Fix unexpected multiline end
|
{
"login": "xwjdsh",
"id": 11025519,
"node_id": "MDQ6VXNlcjExMDI1NTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/11025519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xwjdsh",
"html_url": "https://github.com/xwjdsh",
"followers_url": "https://api.github.com/users/xwjdsh/followers",
"following_url": "https://api.github.com/users/xwjdsh/following{/other_user}",
"gists_url": "https://api.github.com/users/xwjdsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xwjdsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xwjdsh/subscriptions",
"organizations_url": "https://api.github.com/users/xwjdsh/orgs",
"repos_url": "https://api.github.com/users/xwjdsh/repos",
"events_url": "https://api.github.com/users/xwjdsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/xwjdsh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-11-23T05:40:31
| 2023-11-27T07:37:24
| 2023-11-26T18:04:12
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1252",
"html_url": "https://github.com/ollama/ollama/pull/1252",
"diff_url": "https://github.com/ollama/ollama/pull/1252.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1252.patch",
"merged_at": null
}
|
Close #1241
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1252/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7580
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7580/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7580/comments
|
https://api.github.com/repos/ollama/ollama/issues/7580/events
|
https://github.com/ollama/ollama/pull/7580
| 2,645,323,978
|
PR_kwDOJ0Z1Ps6BYH5N
| 7,580
|
Updated Model library with llama3.2-vision
|
{
"login": "arbusam",
"id": 35902952,
"node_id": "MDQ6VXNlcjM1OTAyOTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/35902952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arbusam",
"html_url": "https://github.com/arbusam",
"followers_url": "https://api.github.com/users/arbusam/followers",
"following_url": "https://api.github.com/users/arbusam/following{/other_user}",
"gists_url": "https://api.github.com/users/arbusam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arbusam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arbusam/subscriptions",
"organizations_url": "https://api.github.com/users/arbusam/orgs",
"repos_url": "https://api.github.com/users/arbusam/repos",
"events_url": "https://api.github.com/users/arbusam/events{/privacy}",
"received_events_url": "https://api.github.com/users/arbusam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-08T22:51:33
| 2024-11-10T21:36:25
| 2024-11-10T21:36:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7580",
"html_url": "https://github.com/ollama/ollama/pull/7580",
"diff_url": "https://github.com/ollama/ollama/pull/7580.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7580.patch",
"merged_at": "2024-11-10T21:36:25"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7580/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6722
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6722/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6722/comments
|
https://api.github.com/repos/ollama/ollama/issues/6722/events
|
https://github.com/ollama/ollama/issues/6722
| 2,515,684,938
|
I_kwDOJ0Z1Ps6V8k5K
| 6,722
|
MiniCPM3 support
|
{
"login": "IuvenisSapiens",
"id": 133093191,
"node_id": "U_kgDOB-7XRw",
"avatar_url": "https://avatars.githubusercontent.com/u/133093191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IuvenisSapiens",
"html_url": "https://github.com/IuvenisSapiens",
"followers_url": "https://api.github.com/users/IuvenisSapiens/followers",
"following_url": "https://api.github.com/users/IuvenisSapiens/following{/other_user}",
"gists_url": "https://api.github.com/users/IuvenisSapiens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IuvenisSapiens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IuvenisSapiens/subscriptions",
"organizations_url": "https://api.github.com/users/IuvenisSapiens/orgs",
"repos_url": "https://api.github.com/users/IuvenisSapiens/repos",
"events_url": "https://api.github.com/users/IuvenisSapiens/events{/privacy}",
"received_events_url": "https://api.github.com/users/IuvenisSapiens/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 4
| 2024-09-10T07:53:22
| 2024-12-04T04:30:27
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
[MiniCPM3-4B](https://huggingface.co/openbmb/MiniCPM3-4B) is the 3rd generation of MiniCPM series. The overall performance of MiniCPM3-4B surpasses Phi-3.5-mini-Instruct and GPT-3.5-Turbo-0125, being comparable with many recent 7B~9B models.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6722/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/6722/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5345
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5345/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5345/comments
|
https://api.github.com/repos/ollama/ollama/issues/5345/events
|
https://github.com/ollama/ollama/pull/5345
| 2,379,154,905
|
PR_kwDOJ0Z1Ps5z01aq
| 5,345
|
Set default timeout to 600
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-06-27T22:00:25
| 2024-07-11T21:42:00
| 2024-07-11T21:42:00
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5345",
"html_url": "https://github.com/ollama/ollama/pull/5345",
"diff_url": "https://github.com/ollama/ollama/pull/5345.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5345.patch",
"merged_at": null
}
|
Resolves #5084, #5081
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5345/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/843
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/843/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/843/comments
|
https://api.github.com/repos/ollama/ollama/issues/843/events
|
https://github.com/ollama/ollama/pull/843
| 1,950,856,406
|
PR_kwDOJ0Z1Ps5dM15x
| 843
|
basic request validation
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-18T23:13:14
| 2023-10-19T16:30:46
| 2023-10-19T16:30:45
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/843",
"html_url": "https://github.com/ollama/ollama/pull/843",
"diff_url": "https://github.com/ollama/ollama/pull/843.diff",
"patch_url": "https://github.com/ollama/ollama/pull/843.patch",
"merged_at": "2023-10-19T16:30:45"
}
|
- API returns `{"error": "EOF"}` when the request is empty
- Most handlers pass request fields through without checking whether they're empty, which produces bad errors
- `created_at` on an empty generate request isn't set, so it incorrectly shows `0001-01-01T00:00:00Z`
- Create's `workDir` isn't used after #759
Note: there's an inconsistency in naming the key for model names. Some requests (Generate, Embedding) use `model` while others (Pull, Push, Create) use `name`
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/843/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1595
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1595/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1595/comments
|
https://api.github.com/repos/ollama/ollama/issues/1595/events
|
https://github.com/ollama/ollama/pull/1595
| 2,047,835,124
|
PR_kwDOJ0Z1Ps5iU4fz
| 1,595
|
Added cmdh to community section in README
|
{
"login": "pgibler",
"id": 119892,
"node_id": "MDQ6VXNlcjExOTg5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/119892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pgibler",
"html_url": "https://github.com/pgibler",
"followers_url": "https://api.github.com/users/pgibler/followers",
"following_url": "https://api.github.com/users/pgibler/following{/other_user}",
"gists_url": "https://api.github.com/users/pgibler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pgibler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pgibler/subscriptions",
"organizations_url": "https://api.github.com/users/pgibler/orgs",
"repos_url": "https://api.github.com/users/pgibler/repos",
"events_url": "https://api.github.com/users/pgibler/events{/privacy}",
"received_events_url": "https://api.github.com/users/pgibler/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-12-19T03:09:13
| 2023-12-19T06:43:59
| 2023-12-19T04:55:18
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1595",
"html_url": "https://github.com/ollama/ollama/pull/1595",
"diff_url": "https://github.com/ollama/ollama/pull/1595.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1595.patch",
"merged_at": "2023-12-19T04:55:18"
}
|
Added a link to my terminal application cmdh, which lets you request Linux commands using an LLM (https://github.com/pgibler/cmdh). I just added Ollama support today. The tl;dr is that it sends your command request to the LLM, which outputs a terminal command matching the request. You can then use a hotkey to run the command. It's saved me hours of looking through documentation and can produce pretty complex results.
Thank you for creating & maintaining Ollama!
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1595/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4272
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4272/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4272/comments
|
https://api.github.com/repos/ollama/ollama/issues/4272/events
|
https://github.com/ollama/ollama/pull/4272
| 2,286,750,157
|
PR_kwDOJ0Z1Ps5u8P5e
| 4,272
|
Prune partial downloads
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-09T01:56:19
| 2024-05-09T23:35:21
| 2024-05-09T23:35:20
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4272",
"html_url": "https://github.com/ollama/ollama/pull/4272",
"diff_url": "https://github.com/ollama/ollama/pull/4272.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4272.patch",
"merged_at": "2024-05-09T23:35:20"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4272/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5039
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5039/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5039/comments
|
https://api.github.com/repos/ollama/ollama/issues/5039/events
|
https://github.com/ollama/ollama/issues/5039
| 2,352,349,969
|
I_kwDOJ0Z1Ps6MNgMR
| 5,039
|
How to only run the amd64 cpu version of ollama's docker image?
|
{
"login": "musarehmani291",
"id": 169641750,
"node_id": "U_kgDOChyHFg",
"avatar_url": "https://avatars.githubusercontent.com/u/169641750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/musarehmani291",
"html_url": "https://github.com/musarehmani291",
"followers_url": "https://api.github.com/users/musarehmani291/followers",
"following_url": "https://api.github.com/users/musarehmani291/following{/other_user}",
"gists_url": "https://api.github.com/users/musarehmani291/gists{/gist_id}",
"starred_url": "https://api.github.com/users/musarehmani291/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/musarehmani291/subscriptions",
"organizations_url": "https://api.github.com/users/musarehmani291/orgs",
"repos_url": "https://api.github.com/users/musarehmani291/repos",
"events_url": "https://api.github.com/users/musarehmani291/events{/privacy}",
"received_events_url": "https://api.github.com/users/musarehmani291/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-06-14T02:20:57
| 2024-06-19T00:46:23
| 2024-06-14T16:25:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi, I deployed Ollama using the exact Dockerfile available in your repo without any changes. My server architecture is amd64 CPU. When I tried to build it, the build never finishes. What should I do? Any help would be appreciated.
### OS
Docker
### GPU
_No response_
### CPU
AMD
### Ollama version
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5039/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3134
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3134/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3134/comments
|
https://api.github.com/repos/ollama/ollama/issues/3134/events
|
https://github.com/ollama/ollama/pull/3134
| 2,185,403,314
|
PR_kwDOJ0Z1Ps5plMb7
| 3,134
|
feat: Override system prompt from non-interactive CLI
|
{
"login": "kaizau",
"id": 226967,
"node_id": "MDQ6VXNlcjIyNjk2Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/226967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaizau",
"html_url": "https://github.com/kaizau",
"followers_url": "https://api.github.com/users/kaizau/followers",
"following_url": "https://api.github.com/users/kaizau/following{/other_user}",
"gists_url": "https://api.github.com/users/kaizau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaizau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaizau/subscriptions",
"organizations_url": "https://api.github.com/users/kaizau/orgs",
"repos_url": "https://api.github.com/users/kaizau/repos",
"events_url": "https://api.github.com/users/kaizau/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaizau/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 8
| 2024-03-14T04:35:18
| 2025-01-23T14:40:21
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3134",
"html_url": "https://github.com/ollama/ollama/pull/3134",
"diff_url": "https://github.com/ollama/ollama/pull/3134.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3134.patch",
"merged_at": null
}
|
- Closes #1415
Previously, the only way to set the system prompt for non-interactive use was to import a new Modelfile.
This PR makes Ollama much easier to script non-interactively:
```
ollama run llama2 --system "Translate into French" "This is really useful!"
```
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3134/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3134/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1463
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1463/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1463/comments
|
https://api.github.com/repos/ollama/ollama/issues/1463/events
|
https://github.com/ollama/ollama/pull/1463
| 2,035,052,494
|
PR_kwDOJ0Z1Ps5hpgUx
| 1,463
|
Added Bionic GPT as a front end.
|
{
"login": "9876691",
"id": 36966,
"node_id": "MDQ6VXNlcjM2OTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/36966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/9876691",
"html_url": "https://github.com/9876691",
"followers_url": "https://api.github.com/users/9876691/followers",
"following_url": "https://api.github.com/users/9876691/following{/other_user}",
"gists_url": "https://api.github.com/users/9876691/gists{/gist_id}",
"starred_url": "https://api.github.com/users/9876691/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/9876691/subscriptions",
"organizations_url": "https://api.github.com/users/9876691/orgs",
"repos_url": "https://api.github.com/users/9876691/repos",
"events_url": "https://api.github.com/users/9876691/events{/privacy}",
"received_events_url": "https://api.github.com/users/9876691/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-11T08:01:18
| 2023-12-15T19:33:05
| 2023-12-15T19:33:04
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1463",
"html_url": "https://github.com/ollama/ollama/pull/1463",
"diff_url": "https://github.com/ollama/ollama/pull/1463.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1463.patch",
"merged_at": "2023-12-15T19:33:04"
}
| null |
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1463/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/651
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/651/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/651/comments
|
https://api.github.com/repos/ollama/ollama/issues/651/events
|
https://github.com/ollama/ollama/issues/651
| 1,919,976,329
|
I_kwDOJ0Z1Ps5ycIOJ
| 651
|
Ability to configure embeddings dimension size
|
{
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677485533,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgJX3Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/embeddings",
"name": "embeddings",
"color": "76BF9F",
"default": false,
"description": "Issues around embeddings"
}
] |
open
| false
| null |
[] | null | 5
| 2023-09-29T22:26:15
| 2025-01-17T15:03:25
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Moving from `OllamaEmbeddings(model="llama2:13b")` to `OllamaEmbeddings(model="llama2:7b")`, I am now getting a shape mismatch in my embeddings:
```none
ValueError: shapes (4096,) and (5120,) not aligned: 4096 (dim 0) != 5120 (dim 0)
```
So the `7b` embeddings is slightly smaller (4096) than `13b` embeddings (5120). Is there an argument or parameter I can use to control the embedding size?
I would like to artificially switch `7b` to use 5120, so I don't have to rebuild my vector store.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/651/reactions",
"total_count": 11,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 8
}
|
https://api.github.com/repos/ollama/ollama/issues/651/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/976
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/976/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/976/comments
|
https://api.github.com/repos/ollama/ollama/issues/976/events
|
https://github.com/ollama/ollama/issues/976
| 1,975,139,149
|
I_kwDOJ0Z1Ps51ujtN
| 976
|
Suggestion: Option to "Save / Cache model in RAM" for faster switching
|
{
"login": "ziontee113",
"id": 102876811,
"node_id": "U_kgDOBiHGiw",
"avatar_url": "https://avatars.githubusercontent.com/u/102876811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ziontee113",
"html_url": "https://github.com/ziontee113",
"followers_url": "https://api.github.com/users/ziontee113/followers",
"following_url": "https://api.github.com/users/ziontee113/following{/other_user}",
"gists_url": "https://api.github.com/users/ziontee113/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ziontee113/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ziontee113/subscriptions",
"organizations_url": "https://api.github.com/users/ziontee113/orgs",
"repos_url": "https://api.github.com/users/ziontee113/repos",
"events_url": "https://api.github.com/users/ziontee113/events{/privacy}",
"received_events_url": "https://api.github.com/users/ziontee113/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2023-11-02T22:26:57
| 2024-05-14T19:57:49
| 2024-05-14T19:57:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, thank you all so much for the amazing project.
Today I was testing out using multiple models at the same time, and switching between them is surprisingly acceptable.
I symlinked my models to my HDD. The initial load for each model is slow, but once it's loaded, I can use & switch back and forth between models with only a few seconds of delay.
Throughout the usage & switching process I noticed that Ollama isn't using any of my RAM at all. Maybe if we could have the option to cache frequently switched models, we could improve the switching time to be even faster.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/976/reactions",
"total_count": 19,
"+1": 19,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/976/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8144
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8144/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8144/comments
|
https://api.github.com/repos/ollama/ollama/issues/8144/events
|
https://github.com/ollama/ollama/issues/8144
| 2,746,022,117
|
I_kwDOJ0Z1Ps6jrPjl
| 8,144
|
When models don't fit in VRAM, Issue alert/confirmation instead of running and freezing computer for hours
|
{
"login": "Mugane",
"id": 13005927,
"node_id": "MDQ6VXNlcjEzMDA1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/13005927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mugane",
"html_url": "https://github.com/Mugane",
"followers_url": "https://api.github.com/users/Mugane/followers",
"following_url": "https://api.github.com/users/Mugane/following{/other_user}",
"gists_url": "https://api.github.com/users/Mugane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mugane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mugane/subscriptions",
"organizations_url": "https://api.github.com/users/Mugane/orgs",
"repos_url": "https://api.github.com/users/Mugane/repos",
"events_url": "https://api.github.com/users/Mugane/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mugane/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
open
| false
| null |
[] | null | 16
| 2024-12-17T20:46:20
| 2025-01-23T18:53:05
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When a model is selected that does not fit in VRAM, it runs on the CPU. This is a ridiculous fallback that freezes the whole computer; it should just fail. Or actually use the GPU with shared memory instead of falling back to the CPU only.
### OS
Windows 11 Pro
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.14
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8144/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8144/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4609
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4609/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4609/comments
|
https://api.github.com/repos/ollama/ollama/issues/4609/events
|
https://github.com/ollama/ollama/pull/4609
| 2,314,689,783
|
PR_kwDOJ0Z1Ps5wbTOx
| 4,609
|
Add truncation guard
|
{
"login": "ericcurtin",
"id": 1694275,
"node_id": "MDQ6VXNlcjE2OTQyNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1694275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ericcurtin",
"html_url": "https://github.com/ericcurtin",
"followers_url": "https://api.github.com/users/ericcurtin/followers",
"following_url": "https://api.github.com/users/ericcurtin/following{/other_user}",
"gists_url": "https://api.github.com/users/ericcurtin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ericcurtin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ericcurtin/subscriptions",
"organizations_url": "https://api.github.com/users/ericcurtin/orgs",
"repos_url": "https://api.github.com/users/ericcurtin/repos",
"events_url": "https://api.github.com/users/ericcurtin/events{/privacy}",
"received_events_url": "https://api.github.com/users/ericcurtin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-05-24T07:58:36
| 2024-05-24T09:07:38
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4609",
"html_url": "https://github.com/ollama/ollama/pull/4609",
"diff_url": "https://github.com/ollama/ollama/pull/4609.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4609.patch",
"merged_at": null
}
|
So a partially downloaded version of this file doesn't execute in a curl|sh installation.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4609/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6074
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6074/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6074/comments
|
https://api.github.com/repos/ollama/ollama/issues/6074/events
|
https://github.com/ollama/ollama/pull/6074
| 2,437,962,673
|
PR_kwDOJ0Z1Ps524kM_
| 6,074
|
Add Nix and Flox to package manager listing
|
{
"login": "bryanhonof",
"id": 5932804,
"node_id": "MDQ6VXNlcjU5MzI4MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5932804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryanhonof",
"html_url": "https://github.com/bryanhonof",
"followers_url": "https://api.github.com/users/bryanhonof/followers",
"following_url": "https://api.github.com/users/bryanhonof/following{/other_user}",
"gists_url": "https://api.github.com/users/bryanhonof/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryanhonof/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryanhonof/subscriptions",
"organizations_url": "https://api.github.com/users/bryanhonof/orgs",
"repos_url": "https://api.github.com/users/bryanhonof/repos",
"events_url": "https://api.github.com/users/bryanhonof/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryanhonof/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-30T14:18:14
| 2024-08-29T16:45:36
| 2024-08-29T16:45:35
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6074",
"html_url": "https://github.com/ollama/ollama/pull/6074",
"diff_url": "https://github.com/ollama/ollama/pull/6074.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6074.patch",
"merged_at": "2024-08-29T16:45:35"
}
|
Ollama is available in both the Nix and Flox package manager.
I thought it'd be a good idea to list them here as well. 😁
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6074/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6074/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6343
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6343/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6343/comments
|
https://api.github.com/repos/ollama/ollama/issues/6343/events
|
https://github.com/ollama/ollama/pull/6343
| 2,463,989,960
|
PR_kwDOJ0Z1Ps54RIb6
| 6,343
|
Go back to a pinned Go version
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-13T18:45:14
| 2024-08-13T18:53:53
| 2024-08-13T18:53:50
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6343",
"html_url": "https://github.com/ollama/ollama/pull/6343",
"diff_url": "https://github.com/ollama/ollama/pull/6343.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6343.patch",
"merged_at": "2024-08-13T18:53:50"
}
|
Go version 1.22.6 is triggering AV false positives, so go back to 1.22.5
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6343/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1490
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1490/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1490/comments
|
https://api.github.com/repos/ollama/ollama/issues/1490/events
|
https://github.com/ollama/ollama/pull/1490
| 2,038,514,847
|
PR_kwDOJ0Z1Ps5h1W2i
| 1,490
|
add image support to the chat api
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-12T20:37:22
| 2023-12-12T22:45:33
| 2023-12-12T21:28:58
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1490",
"html_url": "https://github.com/ollama/ollama/pull/1490",
"diff_url": "https://github.com/ollama/ollama/pull/1490.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1490.patch",
"merged_at": "2023-12-12T21:28:58"
}
|
This change allows multimodal vision models to be able to be used in the chat API.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1490/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/912
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/912/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/912/comments
|
https://api.github.com/repos/ollama/ollama/issues/912/events
|
https://github.com/ollama/ollama/issues/912
| 1,963,169,552
|
I_kwDOJ0Z1Ps51A5cQ
| 912
|
API json output is tearing apart words.
|
{
"login": "orkutmuratyilmaz",
"id": 7395916,
"node_id": "MDQ6VXNlcjczOTU5MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7395916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orkutmuratyilmaz",
"html_url": "https://github.com/orkutmuratyilmaz",
"followers_url": "https://api.github.com/users/orkutmuratyilmaz/followers",
"following_url": "https://api.github.com/users/orkutmuratyilmaz/following{/other_user}",
"gists_url": "https://api.github.com/users/orkutmuratyilmaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orkutmuratyilmaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orkutmuratyilmaz/subscriptions",
"organizations_url": "https://api.github.com/users/orkutmuratyilmaz/orgs",
"repos_url": "https://api.github.com/users/orkutmuratyilmaz/repos",
"events_url": "https://api.github.com/users/orkutmuratyilmaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/orkutmuratyilmaz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2023-10-26T10:09:19
| 2023-10-27T17:31:38
| 2023-10-26T20:02:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello all,
Thanks for this beautiful repo.
I've installed Ollama on my server and I'm making API calls with curl, as seen below:
```
curl -X POST http://172.16.41.91:11434/api/generate -d '{
"model": "mistral-openorca",
"prompt":"What is geography?"
}'
```
It works, but in the output the words are split apart, like this:
```
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:58.98891761Z","response":" Ge","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.018778604Z","response":"ography","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.048504687Z","response":" is","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.078094116Z","response":" the","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.107457252Z","response":" study","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.13868812Z","response":" of","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.168320604Z","response":" the","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.197802662Z","response":" Earth","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.227216108Z","response":" and","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.256378119Z","response":" its","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.285954074Z","response":" features","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.315326903Z","response":",","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.344585824Z","response":" including","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.374070348Z","response":" natural","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.403537631Z","response":" phen","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.432798405Z","response":"omena","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.461998564Z","response":" such","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.491377633Z","response":" as","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.520541207Z","response":" land","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.550056497Z","response":"forms","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.579306529Z","response":",","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.608593343Z","response":" climate","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.638077188Z","response":",","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.667181329Z","response":" ecosystem","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.696505473Z","response":"s","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.725893347Z","response":",","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.75503674Z","response":" and","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.784494019Z","response":" human","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.813784319Z","response":"-","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.843240285Z","response":"made","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.872655231Z","response":" environments","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.90192139Z","response":".","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.931271624Z","response":" It","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.96165102Z","response":" en","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:29:59.990867302Z","response":"compass","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.020237128Z","response":"es","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.049652238Z","response":" various","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.079103562Z","response":" fields","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.108553919Z","response":" like","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.138109986Z","response":" physical","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.167264145Z","response":" ge","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.196610395Z","response":"ography","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.225985474Z","response":",","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.255268799Z","response":" human","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.28479104Z","response":" ge","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.313930302Z","response":"ography","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.343539271Z","response":",","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.372764036Z","response":" and","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.402296814Z","response":" environmental","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.431535875Z","response":" ge","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.461012368Z","response":"ography","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.490677883Z","response":".","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.520257262Z","response":" The","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.549584308Z","response":" goal","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.579018043Z","response":" of","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.608334651Z","response":" ge","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.638991298Z","response":"ography","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.668736969Z","response":" is","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.697926723Z","response":" to","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.727354031Z","response":" understand","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.756560779Z","response":" the","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.787057039Z","response":" inter","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.8165341Z","response":"play","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.846016458Z","response":" between","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.875269529Z","response":" the","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.904839891Z","response":" natural","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.934234445Z","response":" world","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.963685453Z","response":",","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:00.994070822Z","response":" human","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:01.023675167Z","response":" societies","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:01.05300978Z","response":",","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:01.082607386Z","response":" and","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:01.112238613Z","response":" their","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:01.141485031Z","response":" interactions","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:01.171112532Z","response":".","done":false}
{"model":"mistral-openorca","created_at":"2023-10-26T08:30:01.201667791Z","response":"","done":true,"context":[28705,13,28789,28766,321,28730,2521,28766,28767,1838,13,3195,349,2970,5064,28804,28789,28766,321,28730,416,28766,28767,13,28789,28766,321,28730,2521,28766,28767,489,11143,13,2404,5064,349,272,3881,302,272,8599,304,871,4190,28725,2490,4229,11315,26504,1259,390,2533,14367,28725,11259,28725,27374,28713,28725,304,2930,28733,15248,20031,28723,661,481,24975,274,4118,5080,737,5277,2970,5064,28725,2930,2970,5064,28725,304,12507,2970,5064,28723,415,5541,302,2970,5064,349,298,2380,272,791,1674,1444,272,4229,1526,28725,2930,25342,28725,304,652,14983,28723],"total_duration":7012283086,"load_duration":4646474654,"prompt_eval_count":35,"prompt_eval_duration":150529000,"eval_count":75,"eval_duration":2088593000}
```
What could be the reason for this?
Best,
Orkut
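The split words above are individual streamed tokens, one JSON object per line. A minimal sketch (assuming Python with only the standard `json` module, using a few of the sample chunks shown above) of how the pieces concatenate back into continuous text:

```python
import json

# A few of the streamed lines from /api/generate, abbreviated to the
# fields relevant here; each line is one token, not one word.
stream = [
    '{"model":"mistral-openorca","response":" Ge","done":false}',
    '{"model":"mistral-openorca","response":"ography","done":false}',
    '{"model":"mistral-openorca","response":" is","done":false}',
]

# Joining the "response" fields in order restores the full text.
text = "".join(json.loads(line)["response"] for line in stream)
print(text)  # prints " Geography is"
```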
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/912/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5414
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5414/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5414/comments
|
https://api.github.com/repos/ollama/ollama/issues/5414/events
|
https://github.com/ollama/ollama/issues/5414
| 2,384,455,067
|
I_kwDOJ0Z1Ps6OH-Wb
| 5,414
|
OLLAMA_NOHISTORY doesn't work
|
{
"login": "sogawa-sps",
"id": 36852014,
"node_id": "MDQ6VXNlcjM2ODUyMDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/36852014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sogawa-sps",
"html_url": "https://github.com/sogawa-sps",
"followers_url": "https://api.github.com/users/sogawa-sps/followers",
"following_url": "https://api.github.com/users/sogawa-sps/following{/other_user}",
"gists_url": "https://api.github.com/users/sogawa-sps/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sogawa-sps/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sogawa-sps/subscriptions",
"organizations_url": "https://api.github.com/users/sogawa-sps/orgs",
"repos_url": "https://api.github.com/users/sogawa-sps/repos",
"events_url": "https://api.github.com/users/sogawa-sps/events{/privacy}",
"received_events_url": "https://api.github.com/users/sogawa-sps/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-07-01T18:14:54
| 2024-07-02T14:00:22
| 2024-07-01T23:05:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
OLLAMA_NOHISTORY is set to "true" but the history is still being written into the "history" file.
> ollama[77458]: 2024/07/01 14:04:09 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:2 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/usr/share/ollama/.ollama/models **_OLLAMA_NOHISTORY:true_** OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
### OS
WSL2
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.48
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5414/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6041
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6041/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6041/comments
|
https://api.github.com/repos/ollama/ollama/issues/6041/events
|
https://github.com/ollama/ollama/pull/6041
| 2,434,941,367
|
PR_kwDOJ0Z1Ps52uHAB
| 6,041
|
Fix typo in chat with image docs
|
{
"login": "hellerve",
"id": 7725188,
"node_id": "MDQ6VXNlcjc3MjUxODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7725188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hellerve",
"html_url": "https://github.com/hellerve",
"followers_url": "https://api.github.com/users/hellerve/followers",
"following_url": "https://api.github.com/users/hellerve/following{/other_user}",
"gists_url": "https://api.github.com/users/hellerve/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hellerve/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hellerve/subscriptions",
"organizations_url": "https://api.github.com/users/hellerve/orgs",
"repos_url": "https://api.github.com/users/hellerve/repos",
"events_url": "https://api.github.com/users/hellerve/events{/privacy}",
"received_events_url": "https://api.github.com/users/hellerve/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-29T09:39:35
| 2024-07-29T15:50:53
| 2024-07-29T15:50:53
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6041",
"html_url": "https://github.com/ollama/ollama/pull/6041",
"diff_url": "https://github.com/ollama/ollama/pull/6041.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6041.patch",
"merged_at": "2024-07-29T15:50:53"
}
|
It looks like the docs were just copy-pasted from the conversation history above. This PR fixes that part of the docs with a short explanation.
Cheers
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6041/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7894
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7894/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7894/comments
|
https://api.github.com/repos/ollama/ollama/issues/7894/events
|
https://github.com/ollama/ollama/pull/7894
| 2,707,365,132
|
PR_kwDOJ0Z1Ps6DoNtY
| 7,894
|
cmd: make func addToZipFile
|
{
"login": "mattn",
"id": 10111,
"node_id": "MDQ6VXNlcjEwMTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mattn",
"html_url": "https://github.com/mattn",
"followers_url": "https://api.github.com/users/mattn/followers",
"following_url": "https://api.github.com/users/mattn/following{/other_user}",
"gists_url": "https://api.github.com/users/mattn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mattn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mattn/subscriptions",
"organizations_url": "https://api.github.com/users/mattn/orgs",
"repos_url": "https://api.github.com/users/mattn/repos",
"events_url": "https://api.github.com/users/mattn/events{/privacy}",
"received_events_url": "https://api.github.com/users/mattn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-11-30T13:26:04
| 2024-11-30T13:37:39
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7894",
"html_url": "https://github.com/ollama/ollama/pull/7894",
"diff_url": "https://github.com/ollama/ollama/pull/7894.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7894.patch",
"merged_at": null
}
|
The `defer` inside the loop does not fire at the end of each iteration; deferred calls only run when the enclosing function returns.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7894/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1990
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1990/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1990/comments
|
https://api.github.com/repos/ollama/ollama/issues/1990/events
|
https://github.com/ollama/ollama/pull/1990
| 2,080,839,748
|
PR_kwDOJ0Z1Ps5kB1Ji
| 1,990
|
Add macos cross-compile CI coverage
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-14T18:16:28
| 2024-01-16T20:32:25
| 2024-01-16T20:31:37
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1990",
"html_url": "https://github.com/ollama/ollama/pull/1990",
"diff_url": "https://github.com/ollama/ollama/pull/1990.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1990.patch",
"merged_at": "2024-01-16T20:31:37"
}
|
Linux and Windows are not yet set up for cross-compilation like MacOS, so I've excluded those from the CI matrix.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1990/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3885
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3885/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3885/comments
|
https://api.github.com/repos/ollama/ollama/issues/3885/events
|
https://github.com/ollama/ollama/issues/3885
| 2,261,753,968
|
I_kwDOJ0Z1Ps6Gz6Bw
| 3,885
|
MESSAGE dont work in REST API
|
{
"login": "Rikhart",
"id": 1900153,
"node_id": "MDQ6VXNlcjE5MDAxNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1900153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rikhart",
"html_url": "https://github.com/Rikhart",
"followers_url": "https://api.github.com/users/Rikhart/followers",
"following_url": "https://api.github.com/users/Rikhart/following{/other_user}",
"gists_url": "https://api.github.com/users/Rikhart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rikhart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rikhart/subscriptions",
"organizations_url": "https://api.github.com/users/Rikhart/orgs",
"repos_url": "https://api.github.com/users/Rikhart/repos",
"events_url": "https://api.github.com/users/Rikhart/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rikhart/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 1
| 2024-04-24T16:54:21
| 2024-11-06T17:39:40
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi, I am using your Modelfile definition to prime an AI, but when I use the REST API the knowledge passed through the MESSAGE structure does not work. What am I missing?
Example of my Modelfile:
```modelfile
FROM llama2
SYSTEM """Eres un asistente que ayuda a los clientes de la empresa Octano, dentro de la empresa se te ha asignado la atención al cliente, debes responder con respuestas cortas y claras. La conversación solo la debes dar en el lenguaje español."""
MESSAGE user """¿Como te llamas?"""
MESSAGE assistant """Mi nombre es Octavio, soy el asistente virtual de Octano, estoy aquí para ayudarte con tus consultas."""
MESSAGE user """¿Que servicios brindan?"""
MESSAGE assistant """En Octano ayudamos a las personas a aprender programación a traves de nuestros cursos y webinars."""
```
Additionally, I am using ollama-js to generate the Modelfile and then load it through the `ollama.create` method:
```js
await ollama.create({ model: 'octavio', modelfile: Modelfile });
```
The problem is that when I use the REST API, the AI only uses the SYSTEM information; everything passed via the MESSAGE parameter is forgotten.
The information passed with the MESSAGE structure only works in the terminal, when I run:
```
ollama run octavio
```
When I chat in the terminal my AI works well, but not when I use the REST API.
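Since the REST API is stateless, a common workaround is to resend the priming conversation with every `/api/chat` request. A minimal Go sketch (the struct and function names here are invented for illustration; the model name and messages are the ones from the Modelfile above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ChatMessage and ChatRequest mirror the JSON shape accepted by
// Ollama's /api/chat endpoint.
type ChatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type ChatRequest struct {
	Model    string        `json:"model"`
	Messages []ChatMessage `json:"messages"`
}

// buildChatRequest replays the Modelfile's MESSAGE history explicitly,
// so the priming conversation travels with each request.
func buildChatRequest(prompt string) ChatRequest {
	return ChatRequest{
		Model: "octavio",
		Messages: []ChatMessage{
			{Role: "user", Content: "¿Como te llamas?"},
			{Role: "assistant", Content: "Mi nombre es Octavio, soy el asistente virtual de Octano."},
			{Role: "user", Content: prompt},
		},
	}
}

func main() {
	body, err := json.Marshal(buildChatRequest("¿Que servicios brindan?"))
	if err != nil {
		panic(err)
	}
	// POST this body to http://localhost:11434/api/chat
	fmt.Println(string(body))
}
```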
Thanks for your help
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3885/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3885/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2455
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2455/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2455/comments
|
https://api.github.com/repos/ollama/ollama/issues/2455/events
|
https://github.com/ollama/ollama/issues/2455
| 2,129,199,864
|
I_kwDOJ0Z1Ps5-6QL4
| 2,455
|
Update rocm version for docker build to 6.0.2
|
{
"login": "mkesper",
"id": 3063558,
"node_id": "MDQ6VXNlcjMwNjM1NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3063558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mkesper",
"html_url": "https://github.com/mkesper",
"followers_url": "https://api.github.com/users/mkesper/followers",
"following_url": "https://api.github.com/users/mkesper/following{/other_user}",
"gists_url": "https://api.github.com/users/mkesper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mkesper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mkesper/subscriptions",
"organizations_url": "https://api.github.com/users/mkesper/orgs",
"repos_url": "https://api.github.com/users/mkesper/repos",
"events_url": "https://api.github.com/users/mkesper/events{/privacy}",
"received_events_url": "https://api.github.com/users/mkesper/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-02-11T22:27:40
| 2024-03-28T23:07:55
| 2024-03-28T23:07:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Docker builds are still based on rocm-5.7.1. ROCm 6.0.2 seems to work better, so please offer at least a variant of the image built against ROCm 6.0.2.
Related PR: https://github.com/ollama/ollama/pull/2454
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2455/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8323
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8323/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8323/comments
|
https://api.github.com/repos/ollama/ollama/issues/8323/events
|
https://github.com/ollama/ollama/issues/8323
| 2,771,113,169
|
I_kwDOJ0Z1Ps6lK9TR
| 8,323
|
Manual linux install: runners/cuda_v11_avx/ollama_llama_server: undefined symbol: ggml_backend_cuda_reg
|
{
"login": "penningt",
"id": 8159953,
"node_id": "MDQ6VXNlcjgxNTk5NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8159953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penningt",
"html_url": "https://github.com/penningt",
"followers_url": "https://api.github.com/users/penningt/followers",
"following_url": "https://api.github.com/users/penningt/following{/other_user}",
"gists_url": "https://api.github.com/users/penningt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penningt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penningt/subscriptions",
"organizations_url": "https://api.github.com/users/penningt/orgs",
"repos_url": "https://api.github.com/users/penningt/repos",
"events_url": "https://api.github.com/users/penningt/events{/privacy}",
"received_events_url": "https://api.github.com/users/penningt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 2
| 2025-01-06T17:27:49
| 2025-01-08T14:29:57
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Tried to update with the manual Linux install (and making sure I deleted lib/ollama first) and got this error:
```
runners/cuda_v11_avx/ollama_llama_server: undefined symbol: ggml_backend_cuda_reg
```
Looks like it may have been fixed for the cuda_v12 runner in #8166
Maybe just an oversight in the build for the cuda 11 runner?
0.5.1 (before the lib restructure) seems to work fine.
Thanks
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8323/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8323/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3952
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3952/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3952/comments
|
https://api.github.com/repos/ollama/ollama/issues/3952/events
|
https://github.com/ollama/ollama/issues/3952
| 2,266,359,136
|
I_kwDOJ0Z1Ps6HFeVg
| 3,952
|
In multi-line string pressing enter does not cut line
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-26T19:10:43
| 2024-07-18T18:25:51
| 2024-07-18T18:25:51
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
jmorgan@mbp moondream2 % ollama run jmorgan/moondream
>>> """/Users/jmorgan/Desktop/dependency.pngQuestion: What is in this image
... Use """ to end multi-line input
```
I pressed enter after `.png` and it did not move `Question:` to a newline
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3952/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3617
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3617/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3617/comments
|
https://api.github.com/repos/ollama/ollama/issues/3617/events
|
https://github.com/ollama/ollama/pull/3617
| 2,240,859,940
|
PR_kwDOJ0Z1Ps5sh7uS
| 3,617
|
types/model: make ParseName variants less confusing
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-12T20:29:11
| 2024-04-12T20:58:51
| 2024-04-12T20:57:58
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3617",
"html_url": "https://github.com/ollama/ollama/pull/3617",
"diff_url": "https://github.com/ollama/ollama/pull/3617.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3617.patch",
"merged_at": "2024-04-12T20:57:57"
}
|
Also fixes an http stripping bug and improves the docs about fills and masks.
To "test": please load and read the package docs in `pkgsite`.
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3617/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5765
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5765/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5765/comments
|
https://api.github.com/repos/ollama/ollama/issues/5765/events
|
https://github.com/ollama/ollama/pull/5765
| 2,415,880,772
|
PR_kwDOJ0Z1Ps51wN99
| 5,765
|
adding "cache_prompt" to options
|
{
"login": "d-kleine",
"id": 53251018,
"node_id": "MDQ6VXNlcjUzMjUxMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53251018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-kleine",
"html_url": "https://github.com/d-kleine",
"followers_url": "https://api.github.com/users/d-kleine/followers",
"following_url": "https://api.github.com/users/d-kleine/following{/other_user}",
"gists_url": "https://api.github.com/users/d-kleine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d-kleine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-kleine/subscriptions",
"organizations_url": "https://api.github.com/users/d-kleine/orgs",
"repos_url": "https://api.github.com/users/d-kleine/repos",
"events_url": "https://api.github.com/users/d-kleine/events{/privacy}",
"received_events_url": "https://api.github.com/users/d-kleine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-18T09:43:49
| 2024-07-18T09:45:51
| 2024-07-18T09:45:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5765",
"html_url": "https://github.com/ollama/ollama/pull/5765",
"diff_url": "https://github.com/ollama/ollama/pull/5765.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5765.patch",
"merged_at": null
}
|
* set CachePrompt type for the cache_prompt option
* added CachePrompt type
|
{
"login": "d-kleine",
"id": 53251018,
"node_id": "MDQ6VXNlcjUzMjUxMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53251018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-kleine",
"html_url": "https://github.com/d-kleine",
"followers_url": "https://api.github.com/users/d-kleine/followers",
"following_url": "https://api.github.com/users/d-kleine/following{/other_user}",
"gists_url": "https://api.github.com/users/d-kleine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d-kleine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-kleine/subscriptions",
"organizations_url": "https://api.github.com/users/d-kleine/orgs",
"repos_url": "https://api.github.com/users/d-kleine/repos",
"events_url": "https://api.github.com/users/d-kleine/events{/privacy}",
"received_events_url": "https://api.github.com/users/d-kleine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5765/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4068
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4068/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4068/comments
|
https://api.github.com/repos/ollama/ollama/issues/4068/events
|
https://github.com/ollama/ollama/pull/4068
| 2,272,784,894
|
PR_kwDOJ0Z1Ps5uNnpE
| 4,068
|
metal: add 512MiB to minimum memory, no partial offloading overhead
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-01T01:56:07
| 2024-05-01T16:23:05
| 2024-05-01T15:46:03
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4068",
"html_url": "https://github.com/ollama/ollama/pull/4068",
"diff_url": "https://github.com/ollama/ollama/pull/4068.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4068.patch",
"merged_at": "2024-05-01T15:46:03"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4068/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6470
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6470/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6470/comments
|
https://api.github.com/repos/ollama/ollama/issues/6470/events
|
https://github.com/ollama/ollama/issues/6470
| 2,482,252,069
|
I_kwDOJ0Z1Ps6T9Ckl
| 6,470
|
registry.ollama.ai: returning text/plain for manifest requests
|
{
"login": "codefromthecrypt",
"id": 64215,
"node_id": "MDQ6VXNlcjY0MjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/64215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codefromthecrypt",
"html_url": "https://github.com/codefromthecrypt",
"followers_url": "https://api.github.com/users/codefromthecrypt/followers",
"following_url": "https://api.github.com/users/codefromthecrypt/following{/other_user}",
"gists_url": "https://api.github.com/users/codefromthecrypt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codefromthecrypt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codefromthecrypt/subscriptions",
"organizations_url": "https://api.github.com/users/codefromthecrypt/orgs",
"repos_url": "https://api.github.com/users/codefromthecrypt/repos",
"events_url": "https://api.github.com/users/codefromthecrypt/events{/privacy}",
"received_events_url": "https://api.github.com/users/codefromthecrypt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-23T04:25:05
| 2024-08-24T00:39:15
| 2024-08-23T20:43:35
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
If you look carefully, the OCI repository for ollama models is returning the wrong content type for manifests. It returns "text/plain" when it should be returning "application/vnd.docker.distribution.manifest.v2+json"
For example:
```bash
$ curl -v https://registry.ollama.ai/v2/library/qwen2/manifests/0.5b|jq .
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host registry.ollama.ai:443 was resolved.
* IPv6: (none)
* IPv4: 172.67.182.229, 104.21.75.227
* Trying 172.67.182.229:443...
* Connected to registry.ollama.ai (172.67.182.229) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [323 bytes data]
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [2522 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [79 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=ollama.ai
* start date: Aug 13 23:32:49 2024 GMT
* expire date: Nov 11 23:32:48 2024 GMT
* subjectAltName: host "registry.ollama.ai" matched cert's "*.ollama.ai"
* issuer: C=US; O=Google Trust Services; CN=WE1
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://registry.ollama.ai/v2/library/qwen2/manifests/0.5b
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: registry.ollama.ai]
* [HTTP/2] [1] [:path: /v2/library/qwen2/manifests/0.5b]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
> GET /v2/library/qwen2/manifests/0.5b HTTP/2
> Host: registry.ollama.ai
> User-Agent: curl/8.7.1
> Accept: */*
>
* Request completely sent off
< HTTP/2 200
< date: Fri, 23 Aug 2024 04:18:47 GMT
< content-type: text/plain; charset=utf-8
< content-length: 857
< via: 1.1 google
< alt-svc: h3=":443"; ma=86400
< cf-cache-status: DYNAMIC
< report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=%2B6ljRtnqE4sFLegJTxwjH%2BmW%2FImWeUe17OYPgJGd7TPeJ%2FjCW8ToeYDUrutXDiYtBBM%2BRNDWEogTElxJ9gbDzHR%2FkuhDfuJGquEttWgk2bcwMR1CWds%2F%2B9sKJpVz43JKNbfWJUw%3D"}],"group":"cf-nel","max_age":604800}
< nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
< server: cloudflare
< cf-ray: 8b784cef6aca0995-HKG
<
{ [857 bytes data]
100 857 100 857 0 0 1171 0 --:--:-- --:--:-- --:--:-- 1172
* Connection #0 to host registry.ollama.ai left intact
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"digest": "sha256:2184ab82477bc33a5e08fa209df88f0631a19e686320cce2cfe9e00695b2f0e6",
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 488
},
"layers": [
{
"digest": "sha256:8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc6a3972a71a8",
"mediaType": "application/vnd.ollama.image.model",
"size": 352151968
},
{
"digest": "sha256:62fbfd9ed093d6e5ac83190c86eec5369317919f4b149598d2dbb38900e9faef",
"mediaType": "application/vnd.ollama.image.template",
"size": 182
},
{
"digest": "sha256:c156170b718ec29139d3653d40ed1986fd92fb7e0959b5c71f3c48f62e6636f4",
"mediaType": "application/vnd.ollama.image.license",
"size": 11344
},
{
"digest": "sha256:f02dd72bb2423204352eabc5637b44d79d17f109fdb510a7c51455892aa2d216",
"mediaType": "application/vnd.ollama.image.params",
"size": 59
}
]
}
```
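A client that wants to detect this mismatch can compare the returned `Content-Type` header against the expected manifest media types instead of trusting it blindly. A minimal sketch (the helper name is made up for this example; the Docker media type is the one shown in the response above, and the OCI type is included as the other common manifest type):

```python
# Illustrative sketch: validate a registry's Content-Type header for a
# manifest request. Only the media type itself is compared; parameters such
# as "; charset=utf-8" are stripped first.

EXPECTED_MANIFEST_TYPES = {
    "application/vnd.docker.distribution.manifest.v2+json",
    "application/vnd.oci.image.manifest.v1+json",
}

def is_expected_manifest_type(content_type: str) -> bool:
    """Return True if the header names a known manifest media type."""
    media_type = content_type.split(";", 1)[0].strip().lower()
    return media_type in EXPECTED_MANIFEST_TYPES

# The header returned by registry.ollama.ai above fails this check:
print(is_expected_manifest_type("text/plain; charset=utf-8"))         # False
print(is_expected_manifest_type(
    "application/vnd.docker.distribution.manifest.v2+json"))          # True
```

Note that the manifest body itself does declare the correct `mediaType`, so only the HTTP header is wrong; strict OCI clients that key off the header would still reject it.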
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.6
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6470/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6192
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6192/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6192/comments
|
https://api.github.com/repos/ollama/ollama/issues/6192/events
|
https://github.com/ollama/ollama/issues/6192
| 2,449,724,197
|
I_kwDOJ0Z1Ps6SA9Ml
| 6,192
|
website: Discord link is broken
|
{
"login": "unitythemaker",
"id": 83316880,
"node_id": "MDQ6VXNlcjgzMzE2ODgw",
"avatar_url": "https://avatars.githubusercontent.com/u/83316880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unitythemaker",
"html_url": "https://github.com/unitythemaker",
"followers_url": "https://api.github.com/users/unitythemaker/followers",
"following_url": "https://api.github.com/users/unitythemaker/following{/other_user}",
"gists_url": "https://api.github.com/users/unitythemaker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/unitythemaker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unitythemaker/subscriptions",
"organizations_url": "https://api.github.com/users/unitythemaker/orgs",
"repos_url": "https://api.github.com/users/unitythemaker/repos",
"events_url": "https://api.github.com/users/unitythemaker/events{/privacy}",
"received_events_url": "https://api.github.com/users/unitythemaker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-08-06T01:05:25
| 2024-08-06T09:50:27
| 2024-08-06T09:50:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hey! I just wanted to let you know that the Discord invite link on the website seems to be broken.

Sorry, I couldn't find a better place to post this, so I am posting it as a GitHub issue.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "unitythemaker",
"id": 83316880,
"node_id": "MDQ6VXNlcjgzMzE2ODgw",
"avatar_url": "https://avatars.githubusercontent.com/u/83316880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unitythemaker",
"html_url": "https://github.com/unitythemaker",
"followers_url": "https://api.github.com/users/unitythemaker/followers",
"following_url": "https://api.github.com/users/unitythemaker/following{/other_user}",
"gists_url": "https://api.github.com/users/unitythemaker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/unitythemaker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unitythemaker/subscriptions",
"organizations_url": "https://api.github.com/users/unitythemaker/orgs",
"repos_url": "https://api.github.com/users/unitythemaker/repos",
"events_url": "https://api.github.com/users/unitythemaker/events{/privacy}",
"received_events_url": "https://api.github.com/users/unitythemaker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6192/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8199
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8199/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8199/comments
|
https://api.github.com/repos/ollama/ollama/issues/8199/events
|
https://github.com/ollama/ollama/issues/8199
| 2,754,054,224
|
I_kwDOJ0Z1Ps6kJ4hQ
| 8,199
|
Ollama recognizes the RTX 4090 on Linux via NVIDIA driver 12.7, but it insists on using CPU & RAM instead of GPU VRAM (24GB) when running codellama
|
{
"login": "kylelee",
"id": 862062,
"node_id": "MDQ6VXNlcjg2MjA2Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/862062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylelee",
"html_url": "https://github.com/kylelee",
"followers_url": "https://api.github.com/users/kylelee/followers",
"following_url": "https://api.github.com/users/kylelee/following{/other_user}",
"gists_url": "https://api.github.com/users/kylelee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylelee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylelee/subscriptions",
"organizations_url": "https://api.github.com/users/kylelee/orgs",
"repos_url": "https://api.github.com/users/kylelee/repos",
"events_url": "https://api.github.com/users/kylelee/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylelee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-12-21T13:53:40
| 2024-12-22T16:21:24
| 2024-12-22T16:21:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama recognizes the RTX 4090 on Linux via NVIDIA driver 12.7, but it insists on using CPU & RAM instead of GPU VRAM (24GB) when running codellama.
The old Ollama version (0.1.14) worked well with everything else unchanged, but after I upgraded from 0.1.14 to 0.5.4, everything became very slow.
It seems that Ollama does not use the GPU processor and VRAM, as shown below.
When I run ollama
```bash
ollama run codellama
```
the nvidia and ollama query:
```
nvidia-smi && ollama ps
```
showed:
```bash
Sat Dec 21 21:41:09 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.57.01 Driver Version: 565.57.01 CUDA Version: 12.7 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4090 Off | 00000000:01:00.0 On | Off |
| 0% 29C P5 12W / 450W | 1780MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 3711 G /usr/bin/gnome-shell 710MiB |
| 0 N/A N/A 4847 C+G ...e/.local/zed.app/libexec/zed-editor 143MiB |
| 0 N/A N/A 5134 G /usr/bin/Xwayland 540MiB |
| 0 N/A N/A 17341 G xxxxxxxx 6MiB |
+-----------------------------------------------------------------------------------------+
NAME ID SIZE PROCESSOR UNTIL
codellama:latest 8fdf8f752f6e 5.6 GB 100% GPU 4 minutes from now
```
The following is ollama(debug on) log output:
[ollama.log](https://github.com/user-attachments/files/18218700/ollama.log)
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "kylelee",
"id": 862062,
"node_id": "MDQ6VXNlcjg2MjA2Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/862062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylelee",
"html_url": "https://github.com/kylelee",
"followers_url": "https://api.github.com/users/kylelee/followers",
"following_url": "https://api.github.com/users/kylelee/following{/other_user}",
"gists_url": "https://api.github.com/users/kylelee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylelee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylelee/subscriptions",
"organizations_url": "https://api.github.com/users/kylelee/orgs",
"repos_url": "https://api.github.com/users/kylelee/repos",
"events_url": "https://api.github.com/users/kylelee/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylelee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8199/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8166
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8166/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8166/comments
|
https://api.github.com/repos/ollama/ollama/issues/8166/events
|
https://github.com/ollama/ollama/issues/8166
| 2,749,054,909
|
I_kwDOJ0Z1Ps6j2z-9
| 8,166
|
undefined symbol: ggml_backend_cuda_reg
|
{
"login": "luisbrandao",
"id": 25795753,
"node_id": "MDQ6VXNlcjI1Nzk1NzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/25795753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luisbrandao",
"html_url": "https://github.com/luisbrandao",
"followers_url": "https://api.github.com/users/luisbrandao/followers",
"following_url": "https://api.github.com/users/luisbrandao/following{/other_user}",
"gists_url": "https://api.github.com/users/luisbrandao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luisbrandao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luisbrandao/subscriptions",
"organizations_url": "https://api.github.com/users/luisbrandao/orgs",
"repos_url": "https://api.github.com/users/luisbrandao/repos",
"events_url": "https://api.github.com/users/luisbrandao/events{/privacy}",
"received_events_url": "https://api.github.com/users/luisbrandao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-12-19T01:33:23
| 2025-01-22T06:37:54
| 2024-12-19T01:36:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have 3 machines (two Rocky 9 and one Rocky 8), all of them with NVIDIA GPUs.
All of them are giving me the error:
```
$ ollama run llama3.1:8b
Error: llama runner process has terminated: error:/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server: undefined symbol: ggml_backend_cuda_reg
```
on all versions after 0.5.1.
I saw the same error in
https://github.com/ollama/ollama/issues/8149
https://github.com/ollama/ollama/issues/8038
But no clue how it was resolved.
What is happening? Did I miss some info?
I am using the install script.
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.2~0.5.4
|
{
"login": "luisbrandao",
"id": 25795753,
"node_id": "MDQ6VXNlcjI1Nzk1NzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/25795753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luisbrandao",
"html_url": "https://github.com/luisbrandao",
"followers_url": "https://api.github.com/users/luisbrandao/followers",
"following_url": "https://api.github.com/users/luisbrandao/following{/other_user}",
"gists_url": "https://api.github.com/users/luisbrandao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luisbrandao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luisbrandao/subscriptions",
"organizations_url": "https://api.github.com/users/luisbrandao/orgs",
"repos_url": "https://api.github.com/users/luisbrandao/repos",
"events_url": "https://api.github.com/users/luisbrandao/events{/privacy}",
"received_events_url": "https://api.github.com/users/luisbrandao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8166/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8166/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1560
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1560/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1560/comments
|
https://api.github.com/repos/ollama/ollama/issues/1560/events
|
https://github.com/ollama/ollama/issues/1560
| 2,044,660,783
|
I_kwDOJ0Z1Ps553wwv
| 1,560
|
`OLLAMA_MODELS` environment variable ignored by Mac app
|
{
"login": "Crypto69",
"id": 50306702,
"node_id": "MDQ6VXNlcjUwMzA2NzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/50306702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Crypto69",
"html_url": "https://github.com/Crypto69",
"followers_url": "https://api.github.com/users/Crypto69/followers",
"following_url": "https://api.github.com/users/Crypto69/following{/other_user}",
"gists_url": "https://api.github.com/users/Crypto69/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Crypto69/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Crypto69/subscriptions",
"organizations_url": "https://api.github.com/users/Crypto69/orgs",
"repos_url": "https://api.github.com/users/Crypto69/repos",
"events_url": "https://api.github.com/users/Crypto69/events{/privacy}",
"received_events_url": "https://api.github.com/users/Crypto69/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 15
| 2023-12-16T08:19:07
| 2025-01-17T19:53:36
| 2024-01-22T23:48:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Documentation FAQ says the following:
### How can I change where Ollama stores models?
To modify where models are stored, you can use the `OLLAMA_MODELS` environment variable. Note that on Linux this means defining `OLLAMA_MODELS` in a drop-in `/etc/systemd/system/ollama.service.d` service file, reloading systemd, and restarting the ollama service.
I have made the changes but it doesn't seem to work when using the ollama Mac app
```
~ ollama list
NAME ID SIZE MODIFIED
deepseek-coder:33b 2941d6ab92f3 18 GB 3 weeks ago
deepseek-coder:33b-instruct-q2_K 92b1e8ffe46e 14 GB 3 weeks ago
deepseek-coder:6.7b 72be2442d736 3.8 GB 3 weeks ago
deepseek-coder:latest 140a485970a6 776 MB 3 weeks ago
llama2:latest fe938a131f40 3.8 GB 3 weeks ago
llama2-uncensored:latest 44040b922233 3.8 GB 3 weeks ago
mistral:latest 1ab49bc0b6a8 4.1 GB 14 minutes ago
wizard-vicuna-uncensored:13b 6887722b6618 7.4 GB 3 weeks ago
wizardlm-uncensored:13b-llama2 886a369d74fc 7.4 GB 3 weeks ago
~ echo $OLLAMA_MODELS
/Volumes/ExternalHD/ollama-models
~ ollama run codellama
pulling manifest
pulling manifest
pulling manifest
pulling manifest
pulling manifest
```
However the model is still getting downloaded to ~/.ollama/models
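A likely explanation (not confirmed in this issue) is that the Mac app is launched by launchd rather than from the shell, so exports in `~/.zshrc` or `~/.bashrc` are not visible to its process, and the server falls back to the default directory. The resolution logic described in the FAQ amounts to "use `OLLAMA_MODELS` if set, otherwise `~/.ollama/models`"; the function below is an illustrative sketch of that behaviour, not Ollama's actual code:

```python
import os
from pathlib import Path

def resolve_models_dir(env=None):
    """Illustrative: resolve the model store directory the way the FAQ
    describes — OLLAMA_MODELS if set and non-empty, else ~/.ollama/models."""
    env = os.environ if env is None else env
    override = env.get("OLLAMA_MODELS")
    if override:
        return Path(override)
    return Path.home() / ".ollama" / "models"

# With the variable set only in an interactive shell, the GUI app's
# environment is effectively the empty dict and the default wins:
print(resolve_models_dir({"OLLAMA_MODELS": "/Volumes/ExternalHD/ollama-models"}))
# /Volumes/ExternalHD/ollama-models
```

So even though `echo $OLLAMA_MODELS` prints the external path in the terminal, the app's own environment may not contain it, which matches the behaviour reported above.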
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1560/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1560/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2873
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2873/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2873/comments
|
https://api.github.com/repos/ollama/ollama/issues/2873/events
|
https://github.com/ollama/ollama/issues/2873
| 2,164,602,505
|
I_kwDOJ0Z1Ps6BBTaJ
| 2,873
|
Improvement suggestion: "Recommended" and brief explanation on ollama.com/library
|
{
"login": "ewebgh33",
"id": 123797054,
"node_id": "U_kgDOB2D-Pg",
"avatar_url": "https://avatars.githubusercontent.com/u/123797054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ewebgh33",
"html_url": "https://github.com/ewebgh33",
"followers_url": "https://api.github.com/users/ewebgh33/followers",
"following_url": "https://api.github.com/users/ewebgh33/following{/other_user}",
"gists_url": "https://api.github.com/users/ewebgh33/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ewebgh33/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ewebgh33/subscriptions",
"organizations_url": "https://api.github.com/users/ewebgh33/orgs",
"repos_url": "https://api.github.com/users/ewebgh33/repos",
"events_url": "https://api.github.com/users/ewebgh33/events{/privacy}",
"received_events_url": "https://api.github.com/users/ewebgh33/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 5
| 2024-03-02T05:44:42
| 2024-07-16T07:21:43
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi
Would you consider adding to the website some features like
- select your GPU (dropdown box). This could help filter suitable models. Personally I have 2x 4090s so I can run 70b models; for me a filter might show the entire list (or close to it), but it would be more useful to someone with 8 GB.
- also, a brief explanation (at the top) of each model for people who aren't deep into reading model blogs daily.
For example, I went to pull phind-codellama.
Then I discovered on the tags tab, there are 49 options, and two of these are the same (latest and 34b). Also that v2 which reportedly has more training is not the default.
Do I need to care about all these other versions? It depends on my needs and GPU. OK, can I filter by accuracy and VRAM, i.e. what's the most capable model I can run?
Or can I filter by speed and accuracy? Etc.
For a lot of pages I imagine this could just be a boilerplate reminder.
What do K, KM, and KS mean? Is a lower number better, or a higher one? Honestly, I have a lot of models and have been playing with LLMs a lot, and I still can't keep track of which abbreviation and quantization level is which.
Anyway I think this would be very helpful to a lot of people. I'd volunteer to help with this but clearly I don't have a deep understanding of the tradeoffs of all the quantized versions!
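A rough rule of thumb that could back such a filter (this is a back-of-the-envelope estimate, not anything Ollama publishes): weight memory is roughly parameter count times the effective bits per weight of the quantization, plus overhead for the KV cache and context. A sketch in Python, with the bit widths as approximations:

```python
# Back-of-the-envelope VRAM estimate for quantized models.
# The bits-per-weight numbers are approximations (assumption), and
# real usage adds overhead for the KV cache and context window.
BITS_PER_WEIGHT = {
    "Q4_K_S": 4.5,   # "S" = small variant of the K-quant family
    "Q4_K_M": 4.8,   # "M" = medium; slightly larger, slightly more accurate
    "Q5_K_M": 5.7,
    "Q8_0": 8.5,
    "F16": 16.0,
}

def estimated_vram_gb(params_billions: float, quant: str) -> float:
    """Approximate weight memory in GB for a given quantization level."""
    return params_billions * BITS_PER_WEIGHT[quant] / 8

print(estimated_vram_gb(34, "Q4_K_M"))  # a 34b Q4_K_M fits in a 24 GB card
```

Under this rule, lower quant numbers mean fewer bits per weight (smaller and faster, but less accurate), and the `_S`/`_M` suffixes are size/quality variants within the K-quant family.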
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2873/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4166
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4166/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4166/comments
|
https://api.github.com/repos/ollama/ollama/issues/4166/events
|
https://github.com/ollama/ollama/issues/4166
| 2,279,487,044
|
I_kwDOJ0Z1Ps6H3jZE
| 4,166
|
summarize this file
|
{
"login": "katmai",
"id": 1245160,
"node_id": "MDQ6VXNlcjEyNDUxNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1245160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katmai",
"html_url": "https://github.com/katmai",
"followers_url": "https://api.github.com/users/katmai/followers",
"following_url": "https://api.github.com/users/katmai/following{/other_user}",
"gists_url": "https://api.github.com/users/katmai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/katmai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/katmai/subscriptions",
"organizations_url": "https://api.github.com/users/katmai/orgs",
"repos_url": "https://api.github.com/users/katmai/repos",
"events_url": "https://api.github.com/users/katmai/events{/privacy}",
"received_events_url": "https://api.github.com/users/katmai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 13
| 2024-05-05T11:45:30
| 2024-05-06T04:01:09
| 2024-05-05T15:56:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
$ ollama run llama3 "Summarize this file: $(cat README.md)"
Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```
I've just noticed this addition to the main README. But what's the deal with it? It makes it look like Ollama would be able to run shell commands, but that is not the case.
```
>>> """ i'm trying something
... Summarize this file: $(cat Makefile)
... """
I see what you're doing!
Since `$( catMakeFile )` appears in your message, I'll assume it's a request to summarize the contents of an imaginary "make" build script.
In that case... *drumroll* ...there is no file named ' Makefile'! 😊
>>> """
... hahaha okay, so let's try something else
... $(ls)
... """
Ahahaha ! You're on a roll!
` $( ls ) `? That looks like an attempt to list the contents of your current directory using Unix shell syntax!
Well... since I'm just AI and not actually running in that environment, there is no actual output from this command. But if you want me toupdateyou about something else or try a different
experiment , feel free!
>>> well, technically, if you ran this command, then you'd be outputting stuff from your environment, not mine. but can i experiment a little bit more?
Ahahaha ! You're absolutely right!
I'm just an AI model running on my own virtual "environment", so any commands or experiments we do together would indeed affect the outputs I generate.
And to answeryour question: ABSOLUTELY! Let's keep experimenting and having fun with these Unix shell tricks!
What else you got in mind?
>>> echo $(ls)
Ahahaha ! You're really getting creative now!
` echo$( ls ) ` is a clever combination of commands. The `((ls)) part would typically list the contents your current directory, and then... *drumroll* ...the output gets echoed back to you!
So if I were running this command in my virtual environment (which we are), it's like saying: "Hey AI assistant friend!"
>>> echo $(w)
Ahahaha ! You're on a roll!
` echo$( w ) ` is another clever combination of commands. The `(W)` part would typically display information about the current system, including whois logged in and what processes are running.
So if I were to run this command (which we sorta kinda did), it's like saying: "Hey AI assistant friend! Let me tell you some juicy detailsabout our virtual environment!"
>>> "echo $(ls -la)"
Ahahaha ! You're really getting into the Unix spirit now!
` echo$( ls-la ) ` is a command that lists all files and directories in your current directory, including hidden ones (those starting witha dot), alongwith detailed information about each file
ordirectory.
So if I were to run thiscommand... *drumroll* ...it would be like saying: "Hey AI assistant friend! Let me show you the entire contents of our virtual environment!"
>>> something like that. but well, since it doesn't seem like i am getting the expected output, i guess i'd have to conclude my experiments. i don't wanna force it.
Ahahaha ! You're a true scientist at heart!
I think I know what's going on here... As an AI model running in this chat window , we are not actually executing Unix commands or accessing your local file system . The `$( )` syntax is just
part of the Markdown formatting language, and it doesn't have any actual effect.
So while you might be having fun experimenting with these command-like constructs (and I'm enjoying our little adventure together!), unfortunately there won' tbeany real output to speakof.
>>> no, the syntax $(command) is not part of markdown. it's bash shell commands. but if it doesn't work it's alright. it just seemed that it would work, as it was presented in the documentation upda
... te.
Ahahaha ! I see what you mean now!
You are absolutely right! The `$( )` syntax is indeed a part of Bash shells scripting language , not Markdown.
I apologize for my earlier misunderstanding.
And yes... unfortunately (or fortunately?), since we're just having this conversation in the chat window, it's unlikely that these commands would actually work as expected. But I'm glad you
had fun experimenting and exploring!
Thanks so muchfor sharing your knowledge with me!
```
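For what it's worth, the behaviour in the README example comes from the local shell, not from Ollama: `$(cat README.md)` is expanded by the shell before `ollama run` ever receives the prompt, which is why the same syntax does nothing inside the interactive `>>>` prompt. A small demonstration of where the expansion happens (using `echo` as a stand-in for `ollama run`):

```python
import subprocess

# The shell expands $( ) before the target program sees its arguments;
# here `echo` plays the role of `ollama run`.
result = subprocess.run(
    ["sh", "-c", 'echo "Summarize this: $(echo hello)"'],
    capture_output=True, text=True,
)
print(result.stdout)  # the substitution already happened in the shell
```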
### OS
Linux
### GPU
_No response_
### CPU
AMD
### Ollama version
0.1.33-rc5
|
{
"login": "katmai",
"id": 1245160,
"node_id": "MDQ6VXNlcjEyNDUxNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1245160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katmai",
"html_url": "https://github.com/katmai",
"followers_url": "https://api.github.com/users/katmai/followers",
"following_url": "https://api.github.com/users/katmai/following{/other_user}",
"gists_url": "https://api.github.com/users/katmai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/katmai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/katmai/subscriptions",
"organizations_url": "https://api.github.com/users/katmai/orgs",
"repos_url": "https://api.github.com/users/katmai/repos",
"events_url": "https://api.github.com/users/katmai/events{/privacy}",
"received_events_url": "https://api.github.com/users/katmai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4166/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6397
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6397/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6397/comments
|
https://api.github.com/repos/ollama/ollama/issues/6397/events
|
https://github.com/ollama/ollama/issues/6397
| 2,471,457,487
|
I_kwDOJ0Z1Ps6TT3LP
| 6,397
|
Warning "Prompt does not specify that the LLM should response in JSON, but JSON format is expected."
|
{
"login": "itinance",
"id": 1758597,
"node_id": "MDQ6VXNlcjE3NTg1OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1758597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itinance",
"html_url": "https://github.com/itinance",
"followers_url": "https://api.github.com/users/itinance/followers",
"following_url": "https://api.github.com/users/itinance/following{/other_user}",
"gists_url": "https://api.github.com/users/itinance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itinance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itinance/subscriptions",
"organizations_url": "https://api.github.com/users/itinance/orgs",
"repos_url": "https://api.github.com/users/itinance/repos",
"events_url": "https://api.github.com/users/itinance/events{/privacy}",
"received_events_url": "https://api.github.com/users/itinance/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-08-17T09:56:27
| 2024-12-02T21:51:12
| 2024-12-02T21:51:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
As written in the title, where does this message come from when using Llama 3.1?
In my prompt, it clearly states
> Answer in JSON. The JSON should be a list (length 5) of dictionaries whose keys are "missing_entities" and "denser_summary".
> Properly escape all special characters, especially double quotes within strings.
Running `sudo journalctl -u ollama --no-pager -f`, I then see this log message:
_Prompt does not specify that the LLM should response in JSON, but JSON format is expected._
The response is valid JSON, so that is not an issue. However, the message indicates that some pre-processing step seems unable to "read" the prompt carefully.
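A plausible explanation (an assumption on my part, not confirmed in this issue): the warning looks like a simple substring heuristic over the final rendered prompt, so template processing or message splitting could hide the word "JSON" from the check even when it appears in the user message. A sketch of such a heuristic (hypothetical; the real check lives in Ollama's Go server code):

```python
# Hypothetical re-creation of the warning heuristic: a plain substring
# test on the final rendered prompt when format=json is requested.
def warns(rendered_prompt: str, fmt: str) -> bool:
    return fmt == "json" and "json" not in rendered_prompt.lower()

# The original prompt would not trigger the warning:
assert not warns('Answer in JSON. Keys: "missing_entities", "denser_summary".', "json")
# But a rendered prompt that drops or truncates the user message would:
assert warns("<|system|>You are a summarizer.<|end|>", "json")
```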
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.6
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6397/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6739
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6739/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6739/comments
|
https://api.github.com/repos/ollama/ollama/issues/6739/events
|
https://github.com/ollama/ollama/pull/6739
| 2,518,273,570
|
PR_kwDOJ0Z1Ps57FTDW
| 6,739
|
add "stop" command
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-09-11T01:44:17
| 2024-09-29T12:15:10
| 2024-09-11T23:36:22
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6739",
"html_url": "https://github.com/ollama/ollama/pull/6739",
"diff_url": "https://github.com/ollama/ollama/pull/6739.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6739.patch",
"merged_at": "2024-09-11T23:36:22"
}
|
This change adds the `ollama stop <model>` command, which can be used to stop a running model. It is triggered by a call to `/api/generate` with an empty prompt and a keep-alive duration of 0, or to `/api/chat` with empty messages and a duration of 0. The model will *expire*, but won't be forcibly unloaded until any currently running inference has finished. While in this state, it shows `Stopping...` in `ollama ps` to indicate that it's about to stop.
In the future we can add a _force_ argument to immediately unload a model and stop all inference, but that's beyond the scope of this change.
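For API users, the CLI command should be equivalent to a generate request with a zero keep-alive. A sketch of the request body, POSTed to `/api/generate` (the exact payload the CLI sends is an assumption based on the description above):

```python
import json

def stop_payload(model: str) -> str:
    """Build a /api/generate body that asks the server to expire a model:
    an empty (omitted) prompt with keep_alive set to 0, mirroring
    `ollama stop <model>`."""
    return json.dumps({"model": model, "keep_alive": 0})

print(stop_payload("llama3"))  # POST this to http://localhost:11434/api/generate
```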
Fixes #6738 #4764
Supersedes #5328
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6739/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6739/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2260
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2260/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2260/comments
|
https://api.github.com/repos/ollama/ollama/issues/2260/events
|
https://github.com/ollama/ollama/issues/2260
| 2,106,267,510
|
I_kwDOJ0Z1Ps59ixd2
| 2,260
|
Unable to rebuild Ollama due to llm/payload_linux.go:7:12: pattern llama.cpp/build/linux/*/*/lib/*.so*: no matching files found
|
{
"login": "Solomin0",
"id": 37559666,
"node_id": "MDQ6VXNlcjM3NTU5NjY2",
"avatar_url": "https://avatars.githubusercontent.com/u/37559666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Solomin0",
"html_url": "https://github.com/Solomin0",
"followers_url": "https://api.github.com/users/Solomin0/followers",
"following_url": "https://api.github.com/users/Solomin0/following{/other_user}",
"gists_url": "https://api.github.com/users/Solomin0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Solomin0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Solomin0/subscriptions",
"organizations_url": "https://api.github.com/users/Solomin0/orgs",
"repos_url": "https://api.github.com/users/Solomin0/repos",
"events_url": "https://api.github.com/users/Solomin0/events{/privacy}",
"received_events_url": "https://api.github.com/users/Solomin0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 8
| 2024-01-29T19:32:52
| 2024-03-11T18:49:13
| 2024-03-11T18:49:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am following the developer instructions.
Started by running `git clone https://github.com/ollama/ollama.git`.
I have go, cmake, and gcc.
I was able to successfully run `go generate ./`.
`go build .` failed and gave me this error:
```root@9a44dfc68b7a:/workspace/a2/ollama# go generate ./
root@9a44dfc68b7a:/workspace/a2/ollama# go build .
llm/payload_linux.go:7:12: pattern llama.cpp/build/linux/*/*/lib/*.so*: no matching files found
root@9a44dfc68b7a:/workspace/a2/ollama#
```
What do I need to change, or what can I do? I have no experience with Go, so even a point in the right direction would be helpful.
My main goal is only to change the timeout behavior of Ollama, so if there is another way to do that, please let me know.
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2260/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2260/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3374
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3374/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3374/comments
|
https://api.github.com/repos/ollama/ollama/issues/3374/events
|
https://github.com/ollama/ollama/issues/3374
| 2,211,288,547
|
I_kwDOJ0Z1Ps6DzZXj
| 3,374
|
Models disappear, service becomes unusable after following instructions for making ollama available on network.
|
{
"login": "applebiter",
"id": 43894661,
"node_id": "MDQ6VXNlcjQzODk0NjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/43894661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/applebiter",
"html_url": "https://github.com/applebiter",
"followers_url": "https://api.github.com/users/applebiter/followers",
"following_url": "https://api.github.com/users/applebiter/following{/other_user}",
"gists_url": "https://api.github.com/users/applebiter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/applebiter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/applebiter/subscriptions",
"organizations_url": "https://api.github.com/users/applebiter/orgs",
"repos_url": "https://api.github.com/users/applebiter/repos",
"events_url": "https://api.github.com/users/applebiter/events{/privacy}",
"received_events_url": "https://api.github.com/users/applebiter/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-03-27T16:45:18
| 2024-03-27T16:50:53
| 2024-03-27T16:50:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I installed ollama easily, downloaded around 50gb worth of models, testing software locally, fine. Realized I needed to be able to access ollama on other network machines and get those results asynchronously, rather than using the supplied Python library on same host. I changed the host to 0.0.0.0 and it seemed to be working great from the machine where I was testing. Shut everything down, wake up the next morning, reboot, discover ollama doesn't want to start and all of the models are gone.
### What did you expect to see?
I expected the models to still be there. I don't even understand why this is a thing in the first place. I thought I was in control of the software, but apparently I am not and do not understand its operation well enough.
### Steps to reproduce
Get it up and running in vanilla, default settings. Download models. Change the network address using systemctl edit as suggested. Reboot host machine and enjoy chaos. I also expect to see a warning that this can happen, frankly.
### Are there any recent changes that introduced the issue?
I changed the network host to 0.0.0.0.
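A likely cause (my guess, not confirmed in this issue): the systemd service runs as its own `ollama` user and stores models under that user's home (`/usr/share/ollama/.ollama/models` by default on Linux), which is a different directory from the `~/.ollama/models` used when running `ollama serve` by hand. If that is what happened here, the models may still be on disk. A drop-in created via `systemctl edit` that sets both the host and an explicit model path might look like this (the paths are assumptions about a default install):

```
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_MODELS=/usr/share/ollama/.ollama/models"
```

followed by `systemctl daemon-reload && systemctl restart ollama`.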
### OS
Linux
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.29
### GPU
AMD
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3374/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4484
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4484/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4484/comments
|
https://api.github.com/repos/ollama/ollama/issues/4484/events
|
https://github.com/ollama/ollama/issues/4484
| 2,301,656,475
|
I_kwDOJ0Z1Ps6JMH2b
| 4,484
|
Gemma:latest aborted (core dumped)
|
{
"login": "ManuLinares",
"id": 3720020,
"node_id": "MDQ6VXNlcjM3MjAwMjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3720020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ManuLinares",
"html_url": "https://github.com/ManuLinares",
"followers_url": "https://api.github.com/users/ManuLinares/followers",
"following_url": "https://api.github.com/users/ManuLinares/following{/other_user}",
"gists_url": "https://api.github.com/users/ManuLinares/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ManuLinares/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ManuLinares/subscriptions",
"organizations_url": "https://api.github.com/users/ManuLinares/orgs",
"repos_url": "https://api.github.com/users/ManuLinares/repos",
"events_url": "https://api.github.com/users/ManuLinares/events{/privacy}",
"received_events_url": "https://api.github.com/users/ManuLinares/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-05-17T01:52:31
| 2024-07-25T20:08:23
| 2024-07-25T20:08:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama server logs: https://pastebin.com/DyqQwwFf
```
$ ollama run gemma:latest
Error: llama runner process has terminated: signal: aborted (core dumped) error:failed to create context with model '/home/mb/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77'
```
This worked on the previous version, 0.1.36.
System: Arch Linux
- linux 6.8.9.arch1-2
- ollama-cuda 0.1.38-1
- cuda 12.4.1-4
- nvidia-dkms 550.78-1
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.38
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4484/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4484/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4181
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4181/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4181/comments
|
https://api.github.com/repos/ollama/ollama/issues/4181/events
|
https://github.com/ollama/ollama/issues/4181
| 2,279,708,054
|
I_kwDOJ0Z1Ps6H4ZWW
| 4,181
|
pull qwen:14b-chat-v1.5-fp16 Error: EOF
|
{
"login": "MarkWard0110",
"id": 90335263,
"node_id": "MDQ6VXNlcjkwMzM1MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkWard0110",
"html_url": "https://github.com/MarkWard0110",
"followers_url": "https://api.github.com/users/MarkWard0110/followers",
"following_url": "https://api.github.com/users/MarkWard0110/following{/other_user}",
"gists_url": "https://api.github.com/users/MarkWard0110/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarkWard0110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarkWard0110/subscriptions",
"organizations_url": "https://api.github.com/users/MarkWard0110/orgs",
"repos_url": "https://api.github.com/users/MarkWard0110/repos",
"events_url": "https://api.github.com/users/MarkWard0110/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarkWard0110/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-05T20:09:20
| 2024-05-06T18:34:06
| 2024-05-06T18:34:06
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
`ollama pull qwen:14b-chat-v1.5-fp16` results in `Error: EOF`
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4181/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3759
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3759/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3759/comments
|
https://api.github.com/repos/ollama/ollama/issues/3759/events
|
https://github.com/ollama/ollama/issues/3759
| 2,253,500,555
|
I_kwDOJ0Z1Ps6GUbCL
| 3,759
|
llama3-instruct models not stopping at stop token
|
{
"login": "moyix",
"id": 34380,
"node_id": "MDQ6VXNlcjM0Mzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/34380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moyix",
"html_url": "https://github.com/moyix",
"followers_url": "https://api.github.com/users/moyix/followers",
"following_url": "https://api.github.com/users/moyix/following{/other_user}",
"gists_url": "https://api.github.com/users/moyix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moyix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moyix/subscriptions",
"organizations_url": "https://api.github.com/users/moyix/orgs",
"repos_url": "https://api.github.com/users/moyix/repos",
"events_url": "https://api.github.com/users/moyix/events{/privacy}",
"received_events_url": "https://api.github.com/users/moyix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 47
| 2024-04-19T17:19:05
| 2024-06-25T04:32:16
| 2024-06-25T04:32:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm using `llama3:70b` through the OpenAI-compatible endpoint. When generating, I am getting outputs like this:
```
Please provide the output of the above command.
Let's proceed from
here!<|eot_id|><|start_header_id|>assistant<|end_header_id|>
It seems that I made a mistake. Radare2 does not have a command called
radebol. Instead, we can use r2 to analyze the binary.
Here's the correct command:
```
This is probably related to https://github.com/vllm-project/vllm/issues/4180 ? There is also an issue/PR on the LLaMA 3 HuggingFace repo: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/discussions/4
But it's a bit confusing since `<|eot_id|>` is already included in the stop sequences:
```
$ ollama show --modelfile llama3:70b
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM llama3:70b
FROM /usr/share/ollama/.ollama/models/blobs/sha256-4fe022a8902336d3c452c88f7aca5590f5b5b02ccfd06320fdefab02412e1f0b
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
```
Is there some other config param that needs to be updated?
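For reference, the expected behavior is that generation halts as soon as any configured stop sequence appears in the output. A minimal Python sketch of that truncation logic (illustrative only, not Ollama's actual implementation):

```python
# Illustrative sketch of stop-sequence truncation; not Ollama's actual code.
def truncate_at_stop(text: str, stop_sequences: list[str]) -> str:
    """Cut `text` at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1 and idx < cut:
            cut = idx
    return text[:cut]

stops = ["<|start_header_id|>", "<|end_header_id|>", "<|eot_id|>"]
out = "Here's the correct command:<|eot_id|><|start_header_id|>assistant"
print(truncate_at_stop(out, stops))  # → "Here's the correct command:"
```

The bug report above shows `<|eot_id|>` surviving in the output even though it is listed in the stop parameters, which suggests the token was not being matched during decoding.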
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.32
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3759/reactions",
"total_count": 20,
"+1": 19,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/3759/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5746
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5746/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5746/comments
|
https://api.github.com/repos/ollama/ollama/issues/5746/events
|
https://github.com/ollama/ollama/issues/5746
| 2,413,527,209
|
I_kwDOJ0Z1Ps6P24Cp
| 5,746
|
Endpoint for Listing models isn't working
|
{
"login": "Spirizeon",
"id": 123345456,
"node_id": "U_kgDOB1oaMA",
"avatar_url": "https://avatars.githubusercontent.com/u/123345456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Spirizeon",
"html_url": "https://github.com/Spirizeon",
"followers_url": "https://api.github.com/users/Spirizeon/followers",
"following_url": "https://api.github.com/users/Spirizeon/following{/other_user}",
"gists_url": "https://api.github.com/users/Spirizeon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Spirizeon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Spirizeon/subscriptions",
"organizations_url": "https://api.github.com/users/Spirizeon/orgs",
"repos_url": "https://api.github.com/users/Spirizeon/repos",
"events_url": "https://api.github.com/users/Spirizeon/events{/privacy}",
"received_events_url": "https://api.github.com/users/Spirizeon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-07-17T12:42:44
| 2024-07-17T13:07:28
| 2024-07-17T13:07:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I was going through the [api docs](https://github.com/ollama/ollama/blob/main/docs/api.md#list-running-models). Upon sending the curl request for listing running models, I came across this error.
I made sure that:
- [x] Ollama is running on my device on port 11434
- [x] All other endpoints are working properly
Command
```
curl http://localhost:11434/api/ps
```
Actual output
```
404 page not found
```
### Here's the Fix
Upon curling the `tags` endpoint, we get the expected JSON response
```
curl http://localhost:11434/api/tags
```
If this solution is valid, I'd like to create a PR for it. If not, I'm requesting another possible solution/fix.
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.34
|
{
"login": "Spirizeon",
"id": 123345456,
"node_id": "U_kgDOB1oaMA",
"avatar_url": "https://avatars.githubusercontent.com/u/123345456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Spirizeon",
"html_url": "https://github.com/Spirizeon",
"followers_url": "https://api.github.com/users/Spirizeon/followers",
"following_url": "https://api.github.com/users/Spirizeon/following{/other_user}",
"gists_url": "https://api.github.com/users/Spirizeon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Spirizeon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Spirizeon/subscriptions",
"organizations_url": "https://api.github.com/users/Spirizeon/orgs",
"repos_url": "https://api.github.com/users/Spirizeon/repos",
"events_url": "https://api.github.com/users/Spirizeon/events{/privacy}",
"received_events_url": "https://api.github.com/users/Spirizeon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5746/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7345
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7345/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7345/comments
|
https://api.github.com/repos/ollama/ollama/issues/7345/events
|
https://github.com/ollama/ollama/pull/7345
| 2,612,055,738
|
PR_kwDOJ0Z1Ps5_zGRm
| 7,345
|
Improve dependency gathering logic
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-24T16:22:20
| 2024-10-24T16:51:56
| 2024-10-24T16:51:53
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7345",
"html_url": "https://github.com/ollama/ollama/pull/7345",
"diff_url": "https://github.com/ollama/ollama/pull/7345.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7345.patch",
"merged_at": "2024-10-24T16:51:53"
}
|
This unifies the rocm/cuda dependency logic into the makefile and fixes a missing define which broke windows rocm.
This backports to main the fixes needed to get 0.4.0-rc5 over the finish line, reducing the amount of content I have to carry on the branch.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7345/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6889
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6889/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6889/comments
|
https://api.github.com/repos/ollama/ollama/issues/6889/events
|
https://github.com/ollama/ollama/issues/6889
| 2,537,750,777
|
I_kwDOJ0Z1Ps6XQwD5
| 6,889
|
Qwen/Qwen2.5-Math
|
{
"login": "tuanlda78202",
"id": 77523560,
"node_id": "MDQ6VXNlcjc3NTIzNTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/77523560?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuanlda78202",
"html_url": "https://github.com/tuanlda78202",
"followers_url": "https://api.github.com/users/tuanlda78202/followers",
"following_url": "https://api.github.com/users/tuanlda78202/following{/other_user}",
"gists_url": "https://api.github.com/users/tuanlda78202/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuanlda78202/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuanlda78202/subscriptions",
"organizations_url": "https://api.github.com/users/tuanlda78202/orgs",
"repos_url": "https://api.github.com/users/tuanlda78202/repos",
"events_url": "https://api.github.com/users/tuanlda78202/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuanlda78202/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 8
| 2024-09-20T03:07:08
| 2025-01-19T06:39:38
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6889/reactions",
"total_count": 18,
"+1": 12,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/6889/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5062
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5062/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5062/comments
|
https://api.github.com/repos/ollama/ollama/issues/5062/events
|
https://github.com/ollama/ollama/issues/5062
| 2,354,834,520
|
I_kwDOJ0Z1Ps6MW-xY
| 5,062
|
Can't install on Fedora 40 due to nvidia driver 555 dependency graph problems
|
{
"login": "wiiznokes",
"id": 78230769,
"node_id": "MDQ6VXNlcjc4MjMwNzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/78230769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wiiznokes",
"html_url": "https://github.com/wiiznokes",
"followers_url": "https://api.github.com/users/wiiznokes/followers",
"following_url": "https://api.github.com/users/wiiznokes/following{/other_user}",
"gists_url": "https://api.github.com/users/wiiznokes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wiiznokes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wiiznokes/subscriptions",
"organizations_url": "https://api.github.com/users/wiiznokes/orgs",
"repos_url": "https://api.github.com/users/wiiznokes/repos",
"events_url": "https://api.github.com/users/wiiznokes/events{/privacy}",
"received_events_url": "https://api.github.com/users/wiiznokes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-06-15T12:47:15
| 2024-06-20T18:27:25
| 2024-06-20T18:27:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
> curl -fsSL https://ollama.com/install.sh | sh
>>> Downloading ollama...
######################################################################## 100.0%#=#=# ######################################################################## 100.0%
>>> Installing ollama to /usr/local/bin...
>>> Creating ollama user...
useradd: failed to reset the lastlog entry of UID 969: No such file or directory
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> Installing NVIDIA repository...
Adding repo from: https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
>>> Installing CUDA driver...
cuda-fedora37-x86_64 576 kB/s | 157 kB 00:00
Error:
 Problem: conflicting requests
- package cuda-drivers-545.23.06-1.x86_64 from cuda-fedora37-x86_64 requires nvidia-driver >= 3:545.23.06, but none of the providers can be installed
- package cuda-drivers-545.23.08-1.x86_64 from cuda-fedora37-x86_64 requires nvidia-driver >= 3:545.23.08, but none of the providers can be installed
- package nvidia-driver-3:545.23.06-1.fc37.x86_64 from cuda-fedora37-x86_64 conflicts with xorg-x11-drv-nvidia provided by xorg-x11-drv-nvidia-3:550.90.07-1.fc40.x86_64 from @System
- package nvidia-driver-3:545.23.06-1.fc37.x86_64 from cuda-fedora37-x86_64 obsoletes xorg-x11-drv-nvidia provided by xorg-x11-drv-nvidia-3:550.90.07-1.fc40.x86_64 from @System
- package nvidia-driver-3:545.23.08-1.fc37.x86_64 from cuda-fedora37-x86_64 conflicts with xorg-x11-drv-nvidia provided by xorg-x11-drv-nvidia-3:550.90.07-1.fc40.x86_64 from @System
- package nvidia-driver-3:545.23.08-1.fc37.x86_64 from cuda-fedora37-x86_64 obsoletes xorg-x11-drv-nvidia provided by xorg-x11-drv-nvidia-3:550.90.07-1.fc40.x86_64 from @System
- package kmod-nvidia-6.8.11-300.fc40.x86_64-3:550.90.07-1.fc40.x86_64 from @System requires nvidia-kmod-common >= 3:550.90.07, but none of the providers can be installed
  - problem with installed package kmod-nvidia-6.8.11-300.fc40.x86_64-3:550.90.07-1.fc40.x86_64
- package xorg-x11-drv-nvidia-3:550.90.07-1.fc40.x86_64 from rpmfusion-nonfree-nvidia-driver is filtered out by modular filtering
- package xorg-x11-drv-nvidia-cuda-3:550.90.07-1.fc40.x86_64 from rpmfusion-nonfree-nvidia-driver is filtered out by modular filtering
(try adding "--allowerasing" to the command line to replace conflicting packages, or "--skip-broken" to skip uninstallable packages)
```
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.44
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5062/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2413
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2413/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2413/comments
|
https://api.github.com/repos/ollama/ollama/issues/2413/events
|
https://github.com/ollama/ollama/pull/2413
| 2,125,886,121
|
PR_kwDOJ0Z1Ps5majNf
| 2,413
|
slog gin logging
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-02-08T19:07:30
| 2024-08-28T23:18:53
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2413",
"html_url": "https://github.com/ollama/ollama/pull/2413",
"diff_url": "https://github.com/ollama/ollama/pull/2413.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2413.patch",
"merged_at": null
}
|
Use slog for gin logging. This produces log output consistent with the rest of Ollama's logs and exposes additional details from requests.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2413/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4689
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4689/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4689/comments
|
https://api.github.com/repos/ollama/ollama/issues/4689/events
|
https://github.com/ollama/ollama/pull/4689
| 2,321,976,911
|
PR_kwDOJ0Z1Ps5w0GS3
| 4,689
|
Fix wsl2 status check for nvidia cards
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-28T21:49:34
| 2024-05-28T21:49:47
| 2024-05-28T21:49:47
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4689",
"html_url": "https://github.com/ollama/ollama/pull/4689",
"diff_url": "https://github.com/ollama/ollama/pull/4689.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4689.patch",
"merged_at": "2024-05-28T21:49:47"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4689/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5355
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5355/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5355/comments
|
https://api.github.com/repos/ollama/ollama/issues/5355/events
|
https://github.com/ollama/ollama/issues/5355
| 2,379,665,376
|
I_kwDOJ0Z1Ps6N1s_g
| 5,355
|
Groq's "name" option within "messages" parameter of the chat endpoint payload
|
{
"login": "pmgh2345",
"id": 16248399,
"node_id": "MDQ6VXNlcjE2MjQ4Mzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/16248399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pmgh2345",
"html_url": "https://github.com/pmgh2345",
"followers_url": "https://api.github.com/users/pmgh2345/followers",
"following_url": "https://api.github.com/users/pmgh2345/following{/other_user}",
"gists_url": "https://api.github.com/users/pmgh2345/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pmgh2345/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pmgh2345/subscriptions",
"organizations_url": "https://api.github.com/users/pmgh2345/orgs",
"repos_url": "https://api.github.com/users/pmgh2345/repos",
"events_url": "https://api.github.com/users/pmgh2345/events{/privacy}",
"received_events_url": "https://api.github.com/users/pmgh2345/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 3
| 2024-06-28T06:10:39
| 2024-11-06T01:13:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Groq's chat completion API supports a really helpful "name" option within each entry of the "messages" parameter that helps the model distinguish between multiple participants sharing the same role. This is useful in multi-agent conversations where more than one agent occupies a given role--for example, a roleplay in which two assistants, each representing a distinct personality with its own system prompt, talk to each other and to the user role.
For more on this, see the "messages" parameter in the Groq docs: https://console.groq.com/docs/text-chat
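As a rough sketch of what this could look like, here is a payload builder with a per-message `name` field. Note that the `name` field is hypothetical for Ollama (Groq supports it today), and the model name and message contents are made up for illustration:

```python
import json

# Hypothetical chat payload for a multi-agent roleplay: two assistant
# personas sharing the "assistant" role are disambiguated by a
# per-message "name" field, as Groq's chat completion API allows.
# Ollama does not support "name" today; this only shows the shape.
def build_chat_payload(model, turns):
    """turns: list of (role, name, content) tuples; name may be None."""
    messages = []
    for role, name, content in turns:
        msg = {"role": role, "content": content}
        if name is not None:
            msg["name"] = name  # distinguishes agents sharing a role
        messages.append(msg)
    return {"model": model, "messages": messages}

payload = build_chat_payload("llama3", [
    ("system", None, "You are two rival detectives."),
    ("assistant", "holmes", "The mud on your boots says Dartmoor."),
    ("assistant", "poirot", "Non, mon ami, observe the left cuff."),
    ("user", None, "Who is right?"),
])
print(json.dumps(payload, indent=2))
```

Messages without a `name` (the system prompt and the user turn) keep today's two-key shape, so the field would be strictly additive.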
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5355/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5355/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6811
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6811/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6811/comments
|
https://api.github.com/repos/ollama/ollama/issues/6811/events
|
https://github.com/ollama/ollama/issues/6811
| 2,526,766,147
|
I_kwDOJ0Z1Ps6Wm2RD
| 6,811
|
iiiorg/piiranha-v1-detect-personal
|
{
"login": "myrulezzz",
"id": 43094013,
"node_id": "MDQ6VXNlcjQzMDk0MDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/43094013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/myrulezzz",
"html_url": "https://github.com/myrulezzz",
"followers_url": "https://api.github.com/users/myrulezzz/followers",
"following_url": "https://api.github.com/users/myrulezzz/following{/other_user}",
"gists_url": "https://api.github.com/users/myrulezzz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/myrulezzz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/myrulezzz/subscriptions",
"organizations_url": "https://api.github.com/users/myrulezzz/orgs",
"repos_url": "https://api.github.com/users/myrulezzz/repos",
"events_url": "https://api.github.com/users/myrulezzz/events{/privacy}",
"received_events_url": "https://api.github.com/users/myrulezzz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-15T06:36:52
| 2025-01-06T07:32:06
| 2025-01-06T07:32:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6811/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3150
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3150/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3150/comments
|
https://api.github.com/repos/ollama/ollama/issues/3150/events
|
https://github.com/ollama/ollama/issues/3150
| 2,187,164,743
|
I_kwDOJ0Z1Ps6CXXxH
| 3,150
|
ollama 0.1.29 does not work on kubernetes with amd gpu
|
{
"login": "arno4000",
"id": 50365065,
"node_id": "MDQ6VXNlcjUwMzY1MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50365065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arno4000",
"html_url": "https://github.com/arno4000",
"followers_url": "https://api.github.com/users/arno4000/followers",
"following_url": "https://api.github.com/users/arno4000/following{/other_user}",
"gists_url": "https://api.github.com/users/arno4000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arno4000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arno4000/subscriptions",
"organizations_url": "https://api.github.com/users/arno4000/orgs",
"repos_url": "https://api.github.com/users/arno4000/repos",
"events_url": "https://api.github.com/users/arno4000/events{/privacy}",
"received_events_url": "https://api.github.com/users/arno4000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-03-14T19:58:06
| 2024-03-15T01:12:43
| 2024-03-14T20:44:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I installed the amd rocm k8s device plugin https://github.com/ROCm/k8s-device-plugin on a fresh k3s installation. Then I installed ollama using the following deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: ollama
spec:
selector:
matchLabels:
app: ollama
template:
metadata:
labels:
app: ollama
spec:
containers:
- name: ollama
image: ollama/ollama:0.1.29
command: ["ollama", "serve"]
resources:
limits:
memory: "6Gi"
cpu: "6000m"
amd.com/gpu: "1"
ports:
- containerPort: 11434
---
apiVersion: v1
kind: Service
metadata:
name: ollama
spec:
selector:
app: ollama
ports:
- port: 11434
targetPort: 11434
```
ollama says in the logs that it has found an AMD GPU but cannot find a suitable ROCm version. Is this an issue with the ROCm k8s device plugin, or is something missing in the container image? I am using an RX 7900 XTX GPU and EndeavourOS as the operating system.
ollama logs:
```
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is:
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOHLmaRRPPGH4cO7njJaDeduLcdEa8TFZQIOpVpzUMQ1
time=2024-03-14T19:49:51.293Z level=INFO source=images.go:806 msg="total blobs: 0"
time=2024-03-14T19:49:51.293Z level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-14T19:49:51.293Z level=INFO source=routes.go:1110 msg="Listening on [::]:11434 (version 0.1.29)"
time=2024-03-14T19:49:51.294Z level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to /tmp/ollama4293249849/runners ..."
time=2024-03-14T19:49:52.703Z level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [rocm_v60000 cpu_avx cuda_v11 cpu cpu_avx2]"
time=2024-03-14T19:49:52.703Z level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-14T19:49:52.703Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-14T19:49:52.703Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"
time=2024-03-14T19:49:52.703Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-14T19:49:52.703Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-03-14T19:49:52.703Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx1100]"
time=2024-03-14T19:49:52.703Z level=WARN source=amd_linux.go:350 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
time=2024-03-14T19:49:52.703Z level=WARN source=amd_linux.go:99 msg="unable to verify rocm library, will use cpu: no suitable rocm found, falling back to CPU"
time=2024-03-14T19:49:52.703Z level=INFO source=routes.go:1133 msg="no GPU detected"
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3150/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6358
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6358/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6358/comments
|
https://api.github.com/repos/ollama/ollama/issues/6358/events
|
https://github.com/ollama/ollama/issues/6358
| 2,465,364,208
|
I_kwDOJ0Z1Ps6S8njw
| 6,358
|
Segmentation fault
|
{
"login": "yicheng-2019",
"id": 58028882,
"node_id": "MDQ6VXNlcjU4MDI4ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/58028882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yicheng-2019",
"html_url": "https://github.com/yicheng-2019",
"followers_url": "https://api.github.com/users/yicheng-2019/followers",
"following_url": "https://api.github.com/users/yicheng-2019/following{/other_user}",
"gists_url": "https://api.github.com/users/yicheng-2019/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yicheng-2019/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yicheng-2019/subscriptions",
"organizations_url": "https://api.github.com/users/yicheng-2019/orgs",
"repos_url": "https://api.github.com/users/yicheng-2019/repos",
"events_url": "https://api.github.com/users/yicheng-2019/events{/privacy}",
"received_events_url": "https://api.github.com/users/yicheng-2019/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-08-14T09:39:12
| 2024-09-04T00:38:41
| 2024-09-04T00:38:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I want to express my appreciation for the fantastic work done on this project. The capabilities and ease of use are truly impressive.
However, after successfully deploying and running a model using the Ollama remote API, I encountered an issue that I haven't been able to resolve. I would like to ask if you have any experience or suggestions regarding the following problem.
After serving a model through ollama's remote API for an extended period, I encountered a segmentation fault that now persists across all commands. Initially the software functioned correctly, but all ollama commands, including `ollama list`, now result in a segmentation fault. Even after reinstalling the latest version of ollama, the issue persists, rendering the tool completely unusable.
Actual Behavior:
Any attempt to run an ollama command results in a segmentation fault. Here is an example:
```
(base) qiuyc@server21:~$ ollama -v
Segmentation fault
```
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
latest
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6358/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4122
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4122/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4122/comments
|
https://api.github.com/repos/ollama/ollama/issues/4122/events
|
https://github.com/ollama/ollama/issues/4122
| 2,277,158,096
|
I_kwDOJ0Z1Ps6HuqzQ
| 4,122
|
Delete models installed from Ollama in my Mac to free the space
|
{
"login": "ISK-VAGR",
"id": 122751844,
"node_id": "U_kgDOB1ELZA",
"avatar_url": "https://avatars.githubusercontent.com/u/122751844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ISK-VAGR",
"html_url": "https://github.com/ISK-VAGR",
"followers_url": "https://api.github.com/users/ISK-VAGR/followers",
"following_url": "https://api.github.com/users/ISK-VAGR/following{/other_user}",
"gists_url": "https://api.github.com/users/ISK-VAGR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ISK-VAGR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ISK-VAGR/subscriptions",
"organizations_url": "https://api.github.com/users/ISK-VAGR/orgs",
"repos_url": "https://api.github.com/users/ISK-VAGR/repos",
"events_url": "https://api.github.com/users/ISK-VAGR/events{/privacy}",
"received_events_url": "https://api.github.com/users/ISK-VAGR/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-05-03T08:09:11
| 2025-01-29T22:42:43
| 2024-05-13T16:53:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I installed two Llama models using `ollama run` in the terminal. They occupy significant disk space, and I need to free space to install a different model.
I tried the `ollama rm` command, but it only deletes the file in the manifests folder, which is KBs in size. I also tried deleting those files manually, but again they are KBs, not the GBs of the real models.
I need a way to delete the big files from my system.
Any clues?
Any help will be appreciated.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4122/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2745
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2745/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2745/comments
|
https://api.github.com/repos/ollama/ollama/issues/2745/events
|
https://github.com/ollama/ollama/issues/2745
| 2,152,834,120
|
I_kwDOJ0Z1Ps6AUaRI
| 2,745
|
Ability to pull/push models from/to OCI registries
|
{
"login": "raffaelespazzoli",
"id": 6179036,
"node_id": "MDQ6VXNlcjYxNzkwMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6179036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raffaelespazzoli",
"html_url": "https://github.com/raffaelespazzoli",
"followers_url": "https://api.github.com/users/raffaelespazzoli/followers",
"following_url": "https://api.github.com/users/raffaelespazzoli/following{/other_user}",
"gists_url": "https://api.github.com/users/raffaelespazzoli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raffaelespazzoli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raffaelespazzoli/subscriptions",
"organizations_url": "https://api.github.com/users/raffaelespazzoli/orgs",
"repos_url": "https://api.github.com/users/raffaelespazzoli/repos",
"events_url": "https://api.github.com/users/raffaelespazzoli/events{/privacy}",
"received_events_url": "https://api.github.com/users/raffaelespazzoli/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-02-25T16:14:03
| 2025-01-19T18:39:43
| 2024-03-01T02:07:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If we wish ollama to be adopted in traditional enterprises, it should be possible to pull/push models from/to private model registries.
I am not sure there is a standard for model registries, but the OCI container registry spec [1] can be adapted to manage any type of content, and OCI registries are readily available in enterprises.
It seems to make sense then to enhance ollama to be able to pull and push from/to OCI registries.
I'd like to know what the feedback from the community is on this idea. If positive, I'd be happy to try to contribute this feature.
[1] OCI distribution specification to be precise: https://github.com/opencontainers/distribution-spec
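For reference, the pull side of the distribution spec boils down to two endpoint families (manifests and blobs). A minimal sketch of the URL construction involved; the registry host and repository name here are placeholders, and auth (token exchange) is omitted:

```python
# Minimal sketch of the OCI distribution-spec pull endpoints that an
# "ollama pull" against a private registry would use. Only URL
# construction is shown; authentication is out of scope.
def manifest_url(registry, name, reference):
    # GET .../manifests/<ref> returns the artifact manifest,
    # which lists the layers (model blobs) by digest.
    return f"https://{registry}/v2/{name}/manifests/{reference}"

def blob_url(registry, name, digest):
    # GET .../blobs/<digest> returns a content-addressed layer.
    return f"https://{registry}/v2/{name}/blobs/{digest}"

print(manifest_url("registry.example.com", "team/llama3", "latest"))
print(blob_url("registry.example.com", "team/llama3", "sha256:abc123"))
```

Since ollama's own registry already uses a manifests-plus-blobs layout, mapping it onto these endpoints is mostly a matter of media types, not wire protocol.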
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2745/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2745/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7985
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7985/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7985/comments
|
https://api.github.com/repos/ollama/ollama/issues/7985/events
|
https://github.com/ollama/ollama/issues/7985
| 2,724,454,134
|
I_kwDOJ0Z1Ps6iY972
| 7,985
|
Change /delete endpoint to use POST request
|
{
"login": "anxkhn",
"id": 83116240,
"node_id": "MDQ6VXNlcjgzMTE2MjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/83116240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anxkhn",
"html_url": "https://github.com/anxkhn",
"followers_url": "https://api.github.com/users/anxkhn/followers",
"following_url": "https://api.github.com/users/anxkhn/following{/other_user}",
"gists_url": "https://api.github.com/users/anxkhn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anxkhn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anxkhn/subscriptions",
"organizations_url": "https://api.github.com/users/anxkhn/orgs",
"repos_url": "https://api.github.com/users/anxkhn/repos",
"events_url": "https://api.github.com/users/anxkhn/events{/privacy}",
"received_events_url": "https://api.github.com/users/anxkhn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 2
| 2024-12-07T07:58:12
| 2024-12-09T22:37:38
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
**Description:**
The Ollama API's `/delete` endpoint currently uses a DELETE request with a message body to specify the model to be deleted. This directly conflicts with the HTTP specification, which states that the DELETE method should NOT have a request body. [1](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/DELETE) [2](https://apidog.com/articles/http-delete-request-body/)
This non-standard approach causes significant issues with HTTP libraries like `httpx`, which correctly adhere to the specification and do not support sending data in a DELETE request.
**Proposed Solution:**
* **Change the `/delete` endpoint to accept a POST request instead.** This aligns with how POST is used for actions that modify the server's state (in this case, deleting a model).
* The request body can remain the same (e.g., `{"model": "llama3:13b"}`) to specify the model to delete.
* Update the API documentation to clearly reflect this change.
**Benefits:**
* **Resolves the conflict with the HTTP specification.**
* **Ensures compatibility with a wider range of standard HTTP libraries.**
* Improves the developer experience by avoiding unexpected errors and workarounds.
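To make the conflict concrete, a small sketch of the request shapes (the base URL is Ollama's default; `use_post=True` shows the proposed shape, not current behavior):

```python
import json

OLLAMA_BASE = "http://localhost:11434"  # default Ollama address

def build_delete_request(model, use_post=False):
    """Build (method, url, body) for removing a model.

    With use_post=False this mirrors today's API: a DELETE carrying a
    JSON body, which httpx's delete() helper cannot send (callers must
    fall back to httpx.request("DELETE", url, json=...)). With
    use_post=True it shows the proposed spec-friendly POST shape,
    which every HTTP client supports out of the box.
    """
    method = "POST" if use_post else "DELETE"
    return method, f"{OLLAMA_BASE}/api/delete", json.dumps({"model": model})

method, url, body = build_delete_request("llama3:13b", use_post=True)
print(method, url, body)
```

The body is identical in both cases; only the method changes, which is what keeps the migration backwards-compatible for clients that already send the JSON payload.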
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.1
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7985/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7985/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8276
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8276/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8276/comments
|
https://api.github.com/repos/ollama/ollama/issues/8276/events
|
https://github.com/ollama/ollama/issues/8276
| 2,764,599,075
|
I_kwDOJ0Z1Ps6kyG8j
| 8,276
|
Ollama cannot load model after several hours on some GPUs
|
{
"login": "QichangZheng",
"id": 82627111,
"node_id": "MDQ6VXNlcjgyNjI3MTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/82627111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QichangZheng",
"html_url": "https://github.com/QichangZheng",
"followers_url": "https://api.github.com/users/QichangZheng/followers",
"following_url": "https://api.github.com/users/QichangZheng/following{/other_user}",
"gists_url": "https://api.github.com/users/QichangZheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QichangZheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QichangZheng/subscriptions",
"organizations_url": "https://api.github.com/users/QichangZheng/orgs",
"repos_url": "https://api.github.com/users/QichangZheng/repos",
"events_url": "https://api.github.com/users/QichangZheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/QichangZheng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 10
| 2024-12-31T18:28:27
| 2025-01-13T01:49:53
| 2025-01-13T01:49:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
It works well on an L20 GPU.
When I switch to an H20 GPU, ollama dies after several hours: it can no longer load a model, and `ollama run gemma2:2b` hangs indefinitely (as shown in the screenshot). However, `ollama serve -h` still works.

### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8276/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7053
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7053/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7053/comments
|
https://api.github.com/repos/ollama/ollama/issues/7053/events
|
https://github.com/ollama/ollama/pull/7053
| 2,558,016,439
|
PR_kwDOJ0Z1Ps59MODG
| 7,053
|
rocm: improve rocm compatibility
|
{
"login": "daniandtheweb",
"id": 57776841,
"node_id": "MDQ6VXNlcjU3Nzc2ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/57776841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daniandtheweb",
"html_url": "https://github.com/daniandtheweb",
"followers_url": "https://api.github.com/users/daniandtheweb/followers",
"following_url": "https://api.github.com/users/daniandtheweb/following{/other_user}",
"gists_url": "https://api.github.com/users/daniandtheweb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daniandtheweb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daniandtheweb/subscriptions",
"organizations_url": "https://api.github.com/users/daniandtheweb/orgs",
"repos_url": "https://api.github.com/users/daniandtheweb/repos",
"events_url": "https://api.github.com/users/daniandtheweb/events{/privacy}",
"received_events_url": "https://api.github.com/users/daniandtheweb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-10-01T02:14:27
| 2024-11-21T19:24:54
| 2024-11-21T19:24:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7053",
"html_url": "https://github.com/ollama/ollama/pull/7053",
"diff_url": "https://github.com/ollama/ollama/pull/7053.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7053.patch",
"merged_at": null
}
|
This PR allows the ROCm build process to work on Linux when using a self built ROCm stack using [rocm_sdk_builder](https://github.com/lamikr/rocm_sdk_builder/).
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7053/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4332
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4332/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4332/comments
|
https://api.github.com/repos/ollama/ollama/issues/4332/events
|
https://github.com/ollama/ollama/issues/4332
| 2,290,541,092
|
I_kwDOJ0Z1Ps6IhuIk
| 4,332
|
Difference in performance between liuhaotian/llava-v1.6-34b and Ollama's llava:34b-v1.6
|
{
"login": "EricWiener",
"id": 18071029,
"node_id": "MDQ6VXNlcjE4MDcxMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18071029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EricWiener",
"html_url": "https://github.com/EricWiener",
"followers_url": "https://api.github.com/users/EricWiener/followers",
"following_url": "https://api.github.com/users/EricWiener/following{/other_user}",
"gists_url": "https://api.github.com/users/EricWiener/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EricWiener/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EricWiener/subscriptions",
"organizations_url": "https://api.github.com/users/EricWiener/orgs",
"repos_url": "https://api.github.com/users/EricWiener/repos",
"events_url": "https://api.github.com/users/EricWiener/events{/privacy}",
"received_events_url": "https://api.github.com/users/EricWiener/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-05-11T00:07:54
| 2024-05-13T15:38:45
| 2024-05-13T15:38:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When using the demo [here](https://llava.hliu.cc/) I get much better results using their `llava:34b-v1.6` than I do via `ollama run llava:34b-v1.6` when prompting with the same prompt followed by the image. Example of how I'm prompting:
```
ollama run llava:34b-v1.6 --verbose
>>> Is there a dog in this picture and if so what is it doing? /data/dog.png
```
Is there any reason for this and can I somehow match the performance of the demo?
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.1.34
|
{
"login": "EricWiener",
"id": 18071029,
"node_id": "MDQ6VXNlcjE4MDcxMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18071029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EricWiener",
"html_url": "https://github.com/EricWiener",
"followers_url": "https://api.github.com/users/EricWiener/followers",
"following_url": "https://api.github.com/users/EricWiener/following{/other_user}",
"gists_url": "https://api.github.com/users/EricWiener/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EricWiener/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EricWiener/subscriptions",
"organizations_url": "https://api.github.com/users/EricWiener/orgs",
"repos_url": "https://api.github.com/users/EricWiener/repos",
"events_url": "https://api.github.com/users/EricWiener/events{/privacy}",
"received_events_url": "https://api.github.com/users/EricWiener/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4332/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/876
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/876/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/876/comments
|
https://api.github.com/repos/ollama/ollama/issues/876/events
|
https://github.com/ollama/ollama/issues/876
| 1,956,035,879
|
I_kwDOJ0Z1Ps50lr0n
| 876
|
Linux: In console session ollama can't answer /show requests
|
{
"login": "byteconcepts",
"id": 33394779,
"node_id": "MDQ6VXNlcjMzMzk0Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/33394779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/byteconcepts",
"html_url": "https://github.com/byteconcepts",
"followers_url": "https://api.github.com/users/byteconcepts/followers",
"following_url": "https://api.github.com/users/byteconcepts/following{/other_user}",
"gists_url": "https://api.github.com/users/byteconcepts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/byteconcepts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/byteconcepts/subscriptions",
"organizations_url": "https://api.github.com/users/byteconcepts/orgs",
"repos_url": "https://api.github.com/users/byteconcepts/repos",
"events_url": "https://api.github.com/users/byteconcepts/events{/privacy}",
"received_events_url": "https://api.github.com/users/byteconcepts/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-10-22T19:59:56
| 2023-10-23T15:38:19
| 2023-10-23T15:38:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
On Linux, ollama runs as a system service, and its home directory is defined in its systemd service file.
Am I wrong? I thought that when I start an ollama console session on a Linux box, the console client is just a client that talks to the ollama system service via the API. Is this wrong?
It seems strange to me and makes no sense that if I request information about the currently used model via...
/show modelfile
...ollama answers with...
error: couldn't get model
Error: stat /root/.ollama/models/manifests/registry.ollama.ai/library/llama2-uncensored/latest
...and then the client crashes.
(In this case, just for demonstration, I started the client as the root user, which nobody would normally do.
If I start the client as a normal user, the result is the same: ollama looks in the home directory of that user, which is also completely wrong.)
If it is not the ollama system service but the client program itself that answers the /show commands, it should at least use the home directory of the ollama user. In that case, the installation should tell us that to run the ollama console app, the user must be added to the ollama group.
Am I the only one who gets this error?
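The path in the error message suggests the client resolves the manifest under the invoking user's home directory. A minimal hypothetical sketch of that resolution (an illustration of the behaviour described, not ollama's actual code):

```python
import os
from pathlib import Path

def manifest_path(model: str, tag: str = "latest") -> Path:
    # Ollama stores manifests under $OLLAMA_MODELS if set, otherwise
    # under ~/.ollama/models of the *current* user -- which is why
    # running the CLI as root makes it look in /root/.ollama.
    base = os.environ.get("OLLAMA_MODELS") or str(Path.home() / ".ollama" / "models")
    return Path(base) / "manifests" / "registry.ollama.ai" / "library" / model / tag
```

Pointing `OLLAMA_MODELS` at the service account's model directory is one way to make the client and the service agree on the same location.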
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/876/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3126
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3126/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3126/comments
|
https://api.github.com/repos/ollama/ollama/issues/3126/events
|
https://github.com/ollama/ollama/pull/3126
| 2,184,892,700
|
PR_kwDOJ0Z1Ps5pjbkR
| 3,126
|
server: replace (":") in blob names with ("-")
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-13T21:10:39
| 2024-03-14T18:29:06
| 2024-03-14T18:29:06
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3126",
"html_url": "https://github.com/ollama/ollama/pull/3126",
"diff_url": "https://github.com/ollama/ollama/pull/3126.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3126.patch",
"merged_at": null
}
|
This commit fixes issues that arise when blobs are manually moved from an OS that allows colons in file paths to one that does not.
The fix replaces colons with dashes in the blob names when `ollama serve` starts, before binding and serving the blobs, and then uses the dash-replaced blob names consistently throughout the other server operations.
Fixes #2032
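The renaming scheme described above can be sketched as follows (a hypothetical Python illustration of the idea, not the PR's actual Go code):

```python
def blob_filename(digest: str) -> str:
    """Map a blob digest like 'sha256:abc123' to a filesystem-safe
    name like 'sha256-abc123' (Windows, for example, forbids ':'
    in file names)."""
    return digest.replace(":", "-")

def blob_digest(filename: str) -> str:
    """Invert the mapping when reading a blob back from disk.
    Only the first dash is the algorithm separator."""
    return filename.replace("-", ":", 1)
```

Because the mapping is applied uniformly on serve, blobs copied between operating systems resolve to the same on-disk name.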
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3126/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1406
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1406/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1406/comments
|
https://api.github.com/repos/ollama/ollama/issues/1406/events
|
https://github.com/ollama/ollama/pull/1406
| 2,029,340,598
|
PR_kwDOJ0Z1Ps5hWNat
| 1,406
|
docs/tutorials: add bit on how to use Fly GPUs on-demand with Ollama
|
{
"login": "Xe",
"id": 529003,
"node_id": "MDQ6VXNlcjUyOTAwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/529003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xe",
"html_url": "https://github.com/Xe",
"followers_url": "https://api.github.com/users/Xe/followers",
"following_url": "https://api.github.com/users/Xe/following{/other_user}",
"gists_url": "https://api.github.com/users/Xe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xe/subscriptions",
"organizations_url": "https://api.github.com/users/Xe/orgs",
"repos_url": "https://api.github.com/users/Xe/repos",
"events_url": "https://api.github.com/users/Xe/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-06T20:36:01
| 2023-12-06T22:14:02
| 2023-12-06T22:14:02
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1406",
"html_url": "https://github.com/ollama/ollama/pull/1406",
"diff_url": "https://github.com/ollama/ollama/pull/1406.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1406.patch",
"merged_at": "2023-12-06T22:14:02"
}
|
Docs version of this post: https://fly.io/blog/scaling-llm-ollama/
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1406/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5543
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5543/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5543/comments
|
https://api.github.com/repos/ollama/ollama/issues/5543/events
|
https://github.com/ollama/ollama/issues/5543
| 2,395,769,002
|
I_kwDOJ0Z1Ps6OzIiq
| 5,543
|
Slow inference speed on RTX 3090.
|
{
"login": "Saniel0",
"id": 118857726,
"node_id": "U_kgDOBxWf_g",
"avatar_url": "https://avatars.githubusercontent.com/u/118857726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saniel0",
"html_url": "https://github.com/Saniel0",
"followers_url": "https://api.github.com/users/Saniel0/followers",
"following_url": "https://api.github.com/users/Saniel0/following{/other_user}",
"gists_url": "https://api.github.com/users/Saniel0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saniel0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saniel0/subscriptions",
"organizations_url": "https://api.github.com/users/Saniel0/orgs",
"repos_url": "https://api.github.com/users/Saniel0/repos",
"events_url": "https://api.github.com/users/Saniel0/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saniel0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-07-08T14:14:44
| 2024-10-15T23:41:01
| 2024-10-15T23:41:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am getting only about 60 t/s compared to 85 t/s in llama.cpp when running llama3-8B-q8_0. When I run Ollama on an RTX 4080 Super, I get the same performance as in llama.cpp. I tried running both natively and in Docker; the results were the same.
This leads me to believe that something is not right. Or is this expected behaviour? Thanks
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.48
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5543/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/538
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/538/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/538/comments
|
https://api.github.com/repos/ollama/ollama/issues/538/events
|
https://github.com/ollama/ollama/issues/538
| 1,899,181,524
|
I_kwDOJ0Z1Ps5xMzXU
| 538
|
Docs request: connecting to front end
|
{
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-09-15T23:12:10
| 2023-10-27T07:10:24
| 2023-10-27T07:10:24
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello ollama, excited to start using this!
I have a docs request:
- How to connect a front end to ollama
- Link example or suggested front ends
Thank you in advance!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/538/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5532
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5532/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5532/comments
|
https://api.github.com/repos/ollama/ollama/issues/5532/events
|
https://github.com/ollama/ollama/issues/5532
| 2,394,124,899
|
I_kwDOJ0Z1Ps6Os3Jj
| 5,532
|
Ollama CPU based don't run in a LXC (Host Kernel 6.8.4-3)
|
{
"login": "T-Herrmann-WI",
"id": 104737662,
"node_id": "U_kgDOBj4rfg",
"avatar_url": "https://avatars.githubusercontent.com/u/104737662?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/T-Herrmann-WI",
"html_url": "https://github.com/T-Herrmann-WI",
"followers_url": "https://api.github.com/users/T-Herrmann-WI/followers",
"following_url": "https://api.github.com/users/T-Herrmann-WI/following{/other_user}",
"gists_url": "https://api.github.com/users/T-Herrmann-WI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/T-Herrmann-WI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/T-Herrmann-WI/subscriptions",
"organizations_url": "https://api.github.com/users/T-Herrmann-WI/orgs",
"repos_url": "https://api.github.com/users/T-Herrmann-WI/repos",
"events_url": "https://api.github.com/users/T-Herrmann-WI/events{/privacy}",
"received_events_url": "https://api.github.com/users/T-Herrmann-WI/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 13
| 2024-07-07T16:35:05
| 2024-11-19T15:25:48
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I use Proxmox VE for virtualization. If I install Ollama in a Linux VM it works fine. If I install Ollama in an LXC container (host kernel 6.8.4-3), CPU-only inference does not work.
#####
ollama run tinyllama
Error: timed out waiting for llama runner to start - progress 1.0
#####
In an LXC container, Ollama works with an Nvidia GPU but not on CPU.
It makes no difference whether I install it natively (curl -fsSL https://ollama.com/install.sh | sh) or use Docker. I have no idea what the problem is; it may be a kernel issue.
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 3,14,15
Off-line CPU(s) list: 0-2,4-13,16-63
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7282 16-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 104%
CPU max MHz: 2800,0000
CPU min MHz: 1500,0000
BogoMIPS: 5600,11
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr
_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3
fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalign
sse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pst
ate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt
xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt
lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v
_spec_ctrl umip rdpid overflow_recov succor smca sev sev_e
### OS
Linux
### GPU
_No response_
### CPU
AMD
### Ollama version
0.1.48
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5532/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8596
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8596/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8596/comments
|
https://api.github.com/repos/ollama/ollama/issues/8596/events
|
https://github.com/ollama/ollama/issues/8596
| 2,811,682,370
|
I_kwDOJ0Z1Ps6nlt5C
| 8,596
|
Ollama on WSL2 detects GPU but times out when running inference
|
{
"login": "rz1027",
"id": 53318196,
"node_id": "MDQ6VXNlcjUzMzE4MTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/53318196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rz1027",
"html_url": "https://github.com/rz1027",
"followers_url": "https://api.github.com/users/rz1027/followers",
"following_url": "https://api.github.com/users/rz1027/following{/other_user}",
"gists_url": "https://api.github.com/users/rz1027/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rz1027/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rz1027/subscriptions",
"organizations_url": "https://api.github.com/users/rz1027/orgs",
"repos_url": "https://api.github.com/users/rz1027/repos",
"events_url": "https://api.github.com/users/rz1027/events{/privacy}",
"received_events_url": "https://api.github.com/users/rz1027/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 5
| 2025-01-26T17:21:17
| 2025-01-28T04:38:00
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am using ManjaroWSL [https://github.com/sileshn/ManjaroWSL2] on Windows 11. Ollama runs fine on WSL and detects my Nvidia 4070 on startup.
The thing is, when I load a model and run it, I get this error:
`gpu VRAM usage didn't recover within timeout`
and it shows that the process is offloaded to the CPU.
I had to install Ollama on the Windows side, migrate all my models, and use the Ollama API hosted on Windows to use the GPU.
Several people on my team have reported the same problem.
Models I saw this problem with: llava:13b. It runs lightning fast on the Windows side, but is too slow on Linux.
```
nvidia-smi
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03 Driver Version: 561.09 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4070 ... On | 00000000:01:00.0 On | N/A |
| N/A 57C P0 27W / 105W | 7390MiB / 8188MiB | 42% Default |
| | | N/A |
```
### OS
WSL2
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8596/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4439
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4439/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4439/comments
|
https://api.github.com/repos/ollama/ollama/issues/4439/events
|
https://github.com/ollama/ollama/pull/4439
| 2,296,522,369
|
PR_kwDOJ0Z1Ps5vdV-Y
| 4,439
|
fix typo in modelfile generation
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-14T22:32:45
| 2024-05-14T22:34:30
| 2024-05-14T22:34:29
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4439",
"html_url": "https://github.com/ollama/ollama/pull/4439",
"diff_url": "https://github.com/ollama/ollama/pull/4439.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4439.patch",
"merged_at": "2024-05-14T22:34:29"
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4439/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1417
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1417/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1417/comments
|
https://api.github.com/repos/ollama/ollama/issues/1417/events
|
https://github.com/ollama/ollama/issues/1417
| 2,031,118,988
|
I_kwDOJ0Z1Ps55EGqM
| 1,417
|
Cant pull model manifest
|
{
"login": "bw-Deejee",
"id": 31662809,
"node_id": "MDQ6VXNlcjMxNjYyODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/31662809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bw-Deejee",
"html_url": "https://github.com/bw-Deejee",
"followers_url": "https://api.github.com/users/bw-Deejee/followers",
"following_url": "https://api.github.com/users/bw-Deejee/following{/other_user}",
"gists_url": "https://api.github.com/users/bw-Deejee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bw-Deejee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bw-Deejee/subscriptions",
"organizations_url": "https://api.github.com/users/bw-Deejee/orgs",
"repos_url": "https://api.github.com/users/bw-Deejee/repos",
"events_url": "https://api.github.com/users/bw-Deejee/events{/privacy}",
"received_events_url": "https://api.github.com/users/bw-Deejee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 9
| 2023-12-07T16:33:14
| 2024-12-07T15:27:13
| 2023-12-12T14:30:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I just installed ollama on a Azure VM.
Running `ollama run llama2` results in
`pulling manifest ⠴` for a couple minutes and eventually:
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/latest": dial tcp 34.120.132.20:443: connect: connection timed out
Visiting the link also results in this response:
`{
"errors": [
{
"code": "MANIFEST_INVALID",
"message": "manifest invalid",
"detail": {}
}
]
}`
I've tried a lot of things mentioned in other issues, since I'm operating behind a proxy, but nothing seems to work even though my proxy works for everything else. The invalid JSON response above leads me to believe the problem might not be on my end. Please help.
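For context, this is the kind of systemd drop-in I attempted for routing the Ollama service through the proxy (the proxy address below is a placeholder, not my real one):

```shell
# Give the Ollama systemd service a proxy via a drop-in override.
# Replace the proxy URL with your own; this one is illustrative.
sudo mkdir -p /etc/systemd/system/ollama.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/ollama.service.d/proxy.conf
[Service]
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
EOF
# Reload unit files and restart so the service picks up the variable.
sudo systemctl daemon-reload
sudo systemctl restart ollama
```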
|
{
"login": "bw-Deejee",
"id": 31662809,
"node_id": "MDQ6VXNlcjMxNjYyODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/31662809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bw-Deejee",
"html_url": "https://github.com/bw-Deejee",
"followers_url": "https://api.github.com/users/bw-Deejee/followers",
"following_url": "https://api.github.com/users/bw-Deejee/following{/other_user}",
"gists_url": "https://api.github.com/users/bw-Deejee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bw-Deejee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bw-Deejee/subscriptions",
"organizations_url": "https://api.github.com/users/bw-Deejee/orgs",
"repos_url": "https://api.github.com/users/bw-Deejee/repos",
"events_url": "https://api.github.com/users/bw-Deejee/events{/privacy}",
"received_events_url": "https://api.github.com/users/bw-Deejee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1417/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/665
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/665/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/665/comments
|
https://api.github.com/repos/ollama/ollama/issues/665/events
|
https://github.com/ollama/ollama/issues/665
| 1,920,841,853
|
I_kwDOJ0Z1Ps5yfbh9
| 665
|
Nonsense in output
|
{
"login": "andersonbcdefg",
"id": 17210823,
"node_id": "MDQ6VXNlcjE3MjEwODIz",
"avatar_url": "https://avatars.githubusercontent.com/u/17210823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andersonbcdefg",
"html_url": "https://github.com/andersonbcdefg",
"followers_url": "https://api.github.com/users/andersonbcdefg/followers",
"following_url": "https://api.github.com/users/andersonbcdefg/following{/other_user}",
"gists_url": "https://api.github.com/users/andersonbcdefg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andersonbcdefg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andersonbcdefg/subscriptions",
"organizations_url": "https://api.github.com/users/andersonbcdefg/orgs",
"repos_url": "https://api.github.com/users/andersonbcdefg/repos",
"events_url": "https://api.github.com/users/andersonbcdefg/events{/privacy}",
"received_events_url": "https://api.github.com/users/andersonbcdefg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2023-10-01T16:06:11
| 2023-10-04T20:36:10
| 2023-10-04T20:35:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I've encountered this before when using llama.cpp on its own, but running ollama on my M2 I get random tokens sometimes in the output. (In particular, the model seems to like to produce the token "sierp". I can't be the only person who's experienced this. Not sure where this error comes from, but would be great if someone could figure it out and upstream it to GGML/Llama.cpp. Here's what it looks like:
Me: What's up?
Llama2: Hello! It's nice to meet you. **sierp!** How can I help you today? Do you
have any questions or topics you'd like to chat about?
Me: Repeat back your previous message.
Llama2: Sure, here is my previous message:
"Hello! It's nice to meet you. What's up?"
Would you like me to repeat anything else?
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/665/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4035
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4035/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4035/comments
|
https://api.github.com/repos/ollama/ollama/issues/4035/events
|
https://github.com/ollama/ollama/pull/4035
| 2,270,135,126
|
PR_kwDOJ0Z1Ps5uEq_p
| 4,035
|
Fix relative path lookup
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-29T23:01:00
| 2024-04-29T23:08:09
| 2024-04-29T23:08:07
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4035",
"html_url": "https://github.com/ollama/ollama/pull/4035",
"diff_url": "https://github.com/ollama/ollama/pull/4035.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4035.patch",
"merged_at": "2024-04-29T23:08:06"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4035/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8265
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8265/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8265/comments
|
https://api.github.com/repos/ollama/ollama/issues/8265/events
|
https://github.com/ollama/ollama/issues/8265
| 2,762,204,298
|
I_kwDOJ0Z1Ps6ko-SK
| 8,265
|
The Mac version cannot customize the listening port
|
{
"login": "Sze-chwan-er",
"id": 45889418,
"node_id": "MDQ6VXNlcjQ1ODg5NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/45889418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sze-chwan-er",
"html_url": "https://github.com/Sze-chwan-er",
"followers_url": "https://api.github.com/users/Sze-chwan-er/followers",
"following_url": "https://api.github.com/users/Sze-chwan-er/following{/other_user}",
"gists_url": "https://api.github.com/users/Sze-chwan-er/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sze-chwan-er/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sze-chwan-er/subscriptions",
"organizations_url": "https://api.github.com/users/Sze-chwan-er/orgs",
"repos_url": "https://api.github.com/users/Sze-chwan-er/repos",
"events_url": "https://api.github.com/users/Sze-chwan-er/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sze-chwan-er/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-12-29T11:06:56
| 2024-12-30T18:56:24
| 2024-12-30T18:56:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
On Macs with Apple silicon, it is not possible to change the default listening address of Ollama, even after stopping Ollama before changing it. Attempting to change it with the following command, both as a regular user and as an administrator, has no effect: `OLLAMA_HOST=0.0.0.0:11434`. Ollama still listens on `localhost:11434` only, which prevents other devices on the local network from reaching the Ollama service.
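For reference, setting the variable in a shell does not reach the macOS menu-bar app; the Ollama FAQ suggests setting it through `launchctl` instead, roughly like this (the address is the one I want, shown as an example):

```shell
# Make OLLAMA_HOST visible to GUI applications, not just the current shell.
launchctl setenv OLLAMA_HOST "0.0.0.0:11434"
# Quit and restart the Ollama app afterwards so it re-reads the environment.
```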
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.4
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8265/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4978
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4978/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4978/comments
|
https://api.github.com/repos/ollama/ollama/issues/4978/events
|
https://github.com/ollama/ollama/issues/4978
| 2,346,179,242
|
I_kwDOJ0Z1Ps6L19qq
| 4,978
|
Systemd service on update
|
{
"login": "nerkdesign",
"id": 10208672,
"node_id": "MDQ6VXNlcjEwMjA4Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/10208672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nerkdesign",
"html_url": "https://github.com/nerkdesign",
"followers_url": "https://api.github.com/users/nerkdesign/followers",
"following_url": "https://api.github.com/users/nerkdesign/following{/other_user}",
"gists_url": "https://api.github.com/users/nerkdesign/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nerkdesign/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nerkdesign/subscriptions",
"organizations_url": "https://api.github.com/users/nerkdesign/orgs",
"repos_url": "https://api.github.com/users/nerkdesign/repos",
"events_url": "https://api.github.com/users/nerkdesign/events{/privacy}",
"received_events_url": "https://api.github.com/users/nerkdesign/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-11T11:31:20
| 2024-06-11T13:58:16
| 2024-06-11T13:58:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
I set environment variables (origins, host, ...) on ollama.service in systemd.
Is there a way to update Ollama without this service file being overwritten?
Many thanks
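For what it's worth, one systemd-native way to keep such settings across reinstalls is a drop-in override rather than editing the unit file itself; the installer rewrites `ollama.service` but leaves drop-ins alone. A sketch (the variable values are illustrative):

```shell
# Opens an editor for a drop-in override stored under
# /etc/systemd/system/ollama.service.d/, separate from the unit file.
sudo systemctl edit ollama.service
# In the editor, add for example:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
#   Environment="OLLAMA_ORIGINS=*"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```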
|
{
"login": "nerkdesign",
"id": 10208672,
"node_id": "MDQ6VXNlcjEwMjA4Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/10208672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nerkdesign",
"html_url": "https://github.com/nerkdesign",
"followers_url": "https://api.github.com/users/nerkdesign/followers",
"following_url": "https://api.github.com/users/nerkdesign/following{/other_user}",
"gists_url": "https://api.github.com/users/nerkdesign/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nerkdesign/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nerkdesign/subscriptions",
"organizations_url": "https://api.github.com/users/nerkdesign/orgs",
"repos_url": "https://api.github.com/users/nerkdesign/repos",
"events_url": "https://api.github.com/users/nerkdesign/events{/privacy}",
"received_events_url": "https://api.github.com/users/nerkdesign/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4978/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/563
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/563/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/563/comments
|
https://api.github.com/repos/ollama/ollama/issues/563/events
|
https://github.com/ollama/ollama/pull/563
| 1,906,495,748
|
PR_kwDOJ0Z1Ps5a3R_p
| 563
|
Context saving/loading
|
{
"login": "voodooattack",
"id": 1285826,
"node_id": "MDQ6VXNlcjEyODU4MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1285826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/voodooattack",
"html_url": "https://github.com/voodooattack",
"followers_url": "https://api.github.com/users/voodooattack/followers",
"following_url": "https://api.github.com/users/voodooattack/following{/other_user}",
"gists_url": "https://api.github.com/users/voodooattack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/voodooattack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/voodooattack/subscriptions",
"organizations_url": "https://api.github.com/users/voodooattack/orgs",
"repos_url": "https://api.github.com/users/voodooattack/repos",
"events_url": "https://api.github.com/users/voodooattack/events{/privacy}",
"received_events_url": "https://api.github.com/users/voodooattack/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-09-21T09:12:25
| 2024-01-21T07:08:04
| 2024-01-21T00:12:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/563",
"html_url": "https://github.com/ollama/ollama/pull/563",
"diff_url": "https://github.com/ollama/ollama/pull/563.diff",
"patch_url": "https://github.com/ollama/ollama/pull/563.patch",
"merged_at": null
}
|
This adds support for saving a conversation/session and resuming it later. The context is (de)serialised as a plain JSON array to/from the specified file.
This adds the following commands:
```
/context save <filename>
/context load <filename>
```
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/563/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/563/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3573
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3573/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3573/comments
|
https://api.github.com/repos/ollama/ollama/issues/3573/events
|
https://github.com/ollama/ollama/issues/3573
| 2,235,213,795
|
I_kwDOJ0Z1Ps6FOqfj
| 3,573
|
Making separate standalone version for UsbDrives
|
{
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-04-10T10:05:00
| 2024-12-31T16:44:07
| 2024-04-19T14:56:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
I recently installed Ollama on my Windows PC; by default it installs on the C drive.
Could you work on a standalone version of Ollama for USB drives, so I can carry the USB anywhere and use it offline?
This would help universalize the AI: I have a 512 GB USB drive, but I cannot install Ollama on it.
This is a feature request from @sonareparth7!
### How should we solve this?
...
### What is the impact of not solving this?
...
### Anything else?
Before installation, the installer should ask where to install Ollama,
and it should also ask where to store the models.
Let's universalize this project!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3573/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1948
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1948/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1948/comments
|
https://api.github.com/repos/ollama/ollama/issues/1948/events
|
https://github.com/ollama/ollama/issues/1948
| 2,078,226,913
|
I_kwDOJ0Z1Ps573znh
| 1,948
|
Understanding Response Data Structure
|
{
"login": "tmattoneill",
"id": 4083139,
"node_id": "MDQ6VXNlcjQwODMxMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4083139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tmattoneill",
"html_url": "https://github.com/tmattoneill",
"followers_url": "https://api.github.com/users/tmattoneill/followers",
"following_url": "https://api.github.com/users/tmattoneill/following{/other_user}",
"gists_url": "https://api.github.com/users/tmattoneill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tmattoneill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tmattoneill/subscriptions",
"organizations_url": "https://api.github.com/users/tmattoneill/orgs",
"repos_url": "https://api.github.com/users/tmattoneill/repos",
"events_url": "https://api.github.com/users/tmattoneill/events{/privacy}",
"received_events_url": "https://api.github.com/users/tmattoneill/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-12T07:23:02
| 2024-01-13T08:11:15
| 2024-01-13T01:09:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm really confused by Ollama's API response. Most other LLMs I've used return a consistent model/JSON object that can serve as the 'assistant' response. However, Ollama returns a different, seemingly random JSON object every time, which makes it nearly impossible to extract the reply from any prompt. See below:
```
generate_response("Hello world")
Hello world
{'dialogue': {'bot': 'Hello! How can I help you today?', 'user': 'Hello world'}}
generate_response("Hello world")
Hello world
{'outputText': 'Hello, World!\n'}
generate_response("Hello world")
Hello world
{'message': 'Hello! How can I assist you today?'}
```
The code generating this is:
```
import json

import requests

HOST = "localhost"
PORT = "11434"

api_request = {
    "model": "mistral",
    "stream": False,
    "raw": True,
    "format": "json",
    "prompt": f"[INST]{prompt}[/INST]"
}

try:
    response = requests.post(f"http://{HOST}:{PORT}/api/generate", json=api_request)
    response.raise_for_status()
    # The model's text is in the 'response' field of the API envelope.
    message = json.loads(response.text)['response']
    # With format="json", that text is itself a JSON string, so parse it again.
    response = json.loads(message)
except requests.exceptions.RequestException as e:
    raise ValueError("Error making API request") from e
except json.JSONDecodeError as e:
    raise ValueError("Error parsing API response") from e
```
Can someone explain this to me? I've been through the docs extensively and cannot for the life of me figure out how to do this fairly straightforward task.
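For what it's worth, a hedged sketch of the distinction at play: the `/api/generate` envelope itself is stable — the model's text is always the string under `response` — and the varying shapes above are the *model's own* JSON output when `format: "json"` is requested without a schema, not a changing API. Extracting the stable part (assuming a non-streaming request) could look like:

```python
import json


def extract_reply(raw_body: str) -> str:
    """Return the model's text from a non-streaming /api/generate reply.

    The envelope's keys (model, response, done, ...) are fixed; only the
    contents of the 'response' string vary from call to call.
    """
    envelope = json.loads(raw_body)
    return envelope["response"]
```

For example, `extract_reply('{"model": "mistral", "response": "hi", "done": true}')` returns `"hi"`; parsing that string as JSON again is only meaningful if the prompt constrains the model to a particular shape.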
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1948/timeline
| null |
not_planned
| false
|