| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/2020
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2020/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2020/comments
|
https://api.github.com/repos/ollama/ollama/issues/2020/events
|
https://github.com/ollama/ollama/pull/2020
| 2,084,726,453
|
PR_kwDOJ0Z1Ps5kPEDA
| 2,020
|
install: pin fedora to max 37
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-16T19:46:46
| 2024-01-18T22:23:43
| 2024-01-18T22:23:42
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2020",
"html_url": "https://github.com/ollama/ollama/pull/2020",
"diff_url": "https://github.com/ollama/ollama/pull/2020.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2020.patch",
"merged_at": "2024-01-18T22:23:42"
}
|
repos for fedora 38 and newer do not exist as of this commit
```
$ dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
Adding repo from: https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
```
```
$ dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/fedora38/x86_64/cuda-fedora38.repo
Adding repo from: https://developer.download.nvidia.com/compute/cuda/repos/fedora38/x86_64/cuda-fedora38.repo
Status code: 404 for https://developer.download.nvidia.com/compute/cuda/repos/fedora38/x86_64/cuda-fedora38.repo (IP: 152.195.19.142)
Error: Configuration of repo failed
```
resolves #1993
resolves #1326
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2020/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5517
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5517/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5517/comments
|
https://api.github.com/repos/ollama/ollama/issues/5517/events
|
https://github.com/ollama/ollama/issues/5517
| 2,393,651,569
|
I_kwDOJ0Z1Ps6OrDlx
| 5,517
|
ggml_cuda_host_malloc: failed to allocate 2560.00 MiB of pinned memory: system has unsupported display driver / cuda driver combination
|
{
"login": "jtc1246",
"id": 43341500,
"node_id": "MDQ6VXNlcjQzMzQxNTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/43341500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jtc1246",
"html_url": "https://github.com/jtc1246",
"followers_url": "https://api.github.com/users/jtc1246/followers",
"following_url": "https://api.github.com/users/jtc1246/following{/other_user}",
"gists_url": "https://api.github.com/users/jtc1246/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jtc1246/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jtc1246/subscriptions",
"organizations_url": "https://api.github.com/users/jtc1246/orgs",
"repos_url": "https://api.github.com/users/jtc1246/repos",
"events_url": "https://api.github.com/users/jtc1246/events{/privacy}",
"received_events_url": "https://api.github.com/users/jtc1246/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-07-06T18:15:58
| 2024-07-11T03:06:25
| 2024-07-11T03:06:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I encountered this error when running `ollama run qwen2:72b-instruct-q5_K_M`,
```
ggml_cuda_host_malloc: failed to allocate 2560.00 MiB of pinned memory: system has unsupported display driver / cuda driver combination
llama_kv_cache_init: CPU KV buffer size = 2560.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
ggml_cuda_host_malloc: failed to allocate 2.45 MiB of pinned memory: system has unsupported display driver / cuda driver combination
llama_new_context_with_model: CPU output buffer size = 2.45 MiB
ggml_cuda_host_malloc: failed to allocate 1104.01 MiB of pinned memory: system has unsupported display driver / cuda driver combination
```
I have 4 tesla M40 24GB gpu,
```
root@I1b07ce634301701432:/hy-tmp# nvidia-smi
Sun Jul 7 02:07:26 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.08 Driver Version: 535.161.08 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Tesla M40 24GB On | 00000000:02:00.0 Off | Off |
| N/A 37C P8 17W / 250W | 3MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 Tesla M40 24GB On | 00000000:03:00.0 Off | Off |
| N/A 37C P8 16W / 250W | 3MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 2 Tesla M40 24GB On | 00000000:81:00.0 Off | Off |
| N/A 36C P8 15W / 250W | 3MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 3 Tesla M40 24GB On | 00000000:82:00.0 Off | Off |
| N/A 37C P8 16W / 250W | 3MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
CUDA is also available in pytorch,
```
root@I1b07ce634301701432:/hy-tmp# python3
Python 3.11.8 (main, Feb 25 2024, 16:41:26) [GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.49-rc8
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5517/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/960
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/960/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/960/comments
|
https://api.github.com/repos/ollama/ollama/issues/960/events
|
https://github.com/ollama/ollama/pull/960
| 1,971,646,812
|
PR_kwDOJ0Z1Ps5eS6K7
| 960
|
fix tautology
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-01T03:50:18
| 2023-11-01T15:30:50
| 2023-11-01T15:30:50
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/960",
"html_url": "https://github.com/ollama/ollama/pull/960",
"diff_url": "https://github.com/ollama/ollama/pull/960.diff",
"patch_url": "https://github.com/ollama/ollama/pull/960.patch",
"merged_at": "2023-11-01T15:30:50"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/960/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3217
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3217/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3217/comments
|
https://api.github.com/repos/ollama/ollama/issues/3217/events
|
https://github.com/ollama/ollama/pull/3217
| 2,191,569,250
|
PR_kwDOJ0Z1Ps5p6FFJ
| 3,217
|
remove global
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-18T08:49:22
| 2024-03-18T09:13:31
| 2024-03-18T09:13:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3217",
"html_url": "https://github.com/ollama/ollama/pull/3217",
"diff_url": "https://github.com/ollama/ollama/pull/3217.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3217.patch",
"merged_at": "2024-03-18T09:13:30"
}
|
the global isn't being used
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3217/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2101
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2101/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2101/comments
|
https://api.github.com/repos/ollama/ollama/issues/2101/events
|
https://github.com/ollama/ollama/pull/2101
| 2,091,506,440
|
PR_kwDOJ0Z1Ps5kmKej
| 2,101
|
Sign dynamic libraries on macOS
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-19T22:15:03
| 2024-01-20T00:24:12
| 2024-01-20T00:24:11
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2101",
"html_url": "https://github.com/ollama/ollama/pull/2101",
"diff_url": "https://github.com/ollama/ollama/pull/2101.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2101.patch",
"merged_at": "2024-01-20T00:24:11"
}
|
Also fixes `gzip` from erroring if `.gz` files already exist
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2101/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5077
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5077/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5077/comments
|
https://api.github.com/repos/ollama/ollama/issues/5077/events
|
https://github.com/ollama/ollama/issues/5077
| 2,355,757,421
|
I_kwDOJ0Z1Ps6MagFt
| 5,077
|
inconsistent CUDA error on codellama on an AMD iGPU (gfx1103, unsupported, with override)
|
{
"login": "myyc",
"id": 5025392,
"node_id": "MDQ6VXNlcjUwMjUzOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5025392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/myyc",
"html_url": "https://github.com/myyc",
"followers_url": "https://api.github.com/users/myyc/followers",
"following_url": "https://api.github.com/users/myyc/following{/other_user}",
"gists_url": "https://api.github.com/users/myyc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/myyc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/myyc/subscriptions",
"organizations_url": "https://api.github.com/users/myyc/orgs",
"repos_url": "https://api.github.com/users/myyc/repos",
"events_url": "https://api.github.com/users/myyc/events{/privacy}",
"received_events_url": "https://api.github.com/users/myyc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 12
| 2024-06-16T13:04:20
| 2024-07-26T13:52:43
| 2024-07-26T13:52:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
i'm trying to use codellama. i get inconsistent experience depending on the question and i have no idea what is causing it. if i ask simple questions like "what is the capital of luxembourg" i get an answer right away. even longer answers (e.g. "where is germany") seem fine. when i ask coding questions though, such as "give me a shell command to find all files in /path created in the past five minutes" i get this
```
time=2024-06-16T15:00:22.313+02:00 level=WARN source=types.go:395 msg="invalid option provided" option=""
ggml_cuda_compute_forward: RMS_NORM failed
CUDA error: shared object initialization failed
current device: 0, in function ggml_cuda_compute_forward at /build/ollama/src/ollama-rocm/llm/llama.cpp/ggml-cuda.cu:2360
err
GGML_ASSERT: /build/ollama/src/ollama-rocm/llm/llama.cpp/ggml-cuda.cu:100: !"CUDA error"
```
i'm running ollama 0.1.44 with `HSA_OVERRIDE_GFX_VERSION=11.0.2` (11.0.0 seems ok also). all on arch linux so everything is up to date (as of 16th june 2024)
**edit**: implicitly mentioned but of course it is a bit strange to me that the error relates to CUDA since i don't have a nvidia card nor a CUDA launcher in the temp files. maybe this is how the ollama codebase works though? no idea
any clue?
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.44
|
{
"login": "myyc",
"id": 5025392,
"node_id": "MDQ6VXNlcjUwMjUzOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5025392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/myyc",
"html_url": "https://github.com/myyc",
"followers_url": "https://api.github.com/users/myyc/followers",
"following_url": "https://api.github.com/users/myyc/following{/other_user}",
"gists_url": "https://api.github.com/users/myyc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/myyc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/myyc/subscriptions",
"organizations_url": "https://api.github.com/users/myyc/orgs",
"repos_url": "https://api.github.com/users/myyc/repos",
"events_url": "https://api.github.com/users/myyc/events{/privacy}",
"received_events_url": "https://api.github.com/users/myyc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5077/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/557
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/557/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/557/comments
|
https://api.github.com/repos/ollama/ollama/issues/557/events
|
https://github.com/ollama/ollama/pull/557
| 1,905,516,705
|
PR_kwDOJ0Z1Ps5az-gQ
| 557
|
fix impossible condition
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-20T18:28:15
| 2023-09-20T18:51:02
| 2023-09-20T18:51:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/557",
"html_url": "https://github.com/ollama/ollama/pull/557",
"diff_url": "https://github.com/ollama/ollama/pull/557.diff",
"patch_url": "https://github.com/ollama/ollama/pull/557.patch",
"merged_at": "2023-09-20T18:51:01"
}
|
split from #536
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/557/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/816
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/816/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/816/comments
|
https://api.github.com/repos/ollama/ollama/issues/816/events
|
https://github.com/ollama/ollama/issues/816
| 1,946,975,822
|
I_kwDOJ0Z1Ps50DH5O
| 816
|
Unable to provide site access to the model running on Ubuntu Virtual Machine
|
{
"login": "NishaDeepak",
"id": 68985503,
"node_id": "MDQ6VXNlcjY4OTg1NTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/68985503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NishaDeepak",
"html_url": "https://github.com/NishaDeepak",
"followers_url": "https://api.github.com/users/NishaDeepak/followers",
"following_url": "https://api.github.com/users/NishaDeepak/following{/other_user}",
"gists_url": "https://api.github.com/users/NishaDeepak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NishaDeepak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NishaDeepak/subscriptions",
"organizations_url": "https://api.github.com/users/NishaDeepak/orgs",
"repos_url": "https://api.github.com/users/NishaDeepak/repos",
"events_url": "https://api.github.com/users/NishaDeepak/events{/privacy}",
"received_events_url": "https://api.github.com/users/NishaDeepak/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 7
| 2023-10-17T09:31:25
| 2023-12-04T20:15:18
| 2023-12-04T20:15:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I installed Ollama desktop in my ubuntu VM and ran the following commands:
$ OLLAMA_ORIGINS=https://webml-demo.vercel.app OLLAMA_HOST=127.0.0.1:11435 ollama serve
Then, in another window:
$ OLLAMA_HOST=127.0.0.1:11435 ollama pull mistral
But I am unable to access the webpage.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/816/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4743
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4743/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4743/comments
|
https://api.github.com/repos/ollama/ollama/issues/4743/events
|
https://github.com/ollama/ollama/issues/4743
| 2,327,023,842
|
I_kwDOJ0Z1Ps6Ks5Di
| 4,743
|
how to update new version of OLLAMA on Windows 12
|
{
"login": "Jinish2170",
"id": 121560356,
"node_id": "U_kgDOBz7dJA",
"avatar_url": "https://avatars.githubusercontent.com/u/121560356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jinish2170",
"html_url": "https://github.com/Jinish2170",
"followers_url": "https://api.github.com/users/Jinish2170/followers",
"following_url": "https://api.github.com/users/Jinish2170/following{/other_user}",
"gists_url": "https://api.github.com/users/Jinish2170/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jinish2170/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jinish2170/subscriptions",
"organizations_url": "https://api.github.com/users/Jinish2170/orgs",
"repos_url": "https://api.github.com/users/Jinish2170/repos",
"events_url": "https://api.github.com/users/Jinish2170/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jinish2170/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396205,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2abQ",
"url": "https://api.github.com/repos/ollama/ollama/labels/help%20wanted",
"name": "help wanted",
"color": "008672",
"default": true,
"description": "Extra attention is needed"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-05-31T05:59:21
| 2024-11-07T13:09:57
| 2024-05-31T19:02:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am constantly getting notifications that a new version of Ollama is available, but I have not found a way to update to the new version on Windows.
Please share any info regarding this query.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4743/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2960
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2960/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2960/comments
|
https://api.github.com/repos/ollama/ollama/issues/2960/events
|
https://github.com/ollama/ollama/issues/2960
| 2,172,323,349
|
I_kwDOJ0Z1Ps6BewYV
| 2,960
|
[Win11] mistral 7B performance down between 0.1.28 and 0.1.27
|
{
"login": "stevengans",
"id": 10685309,
"node_id": "MDQ6VXNlcjEwNjg1MzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/10685309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevengans",
"html_url": "https://github.com/stevengans",
"followers_url": "https://api.github.com/users/stevengans/followers",
"following_url": "https://api.github.com/users/stevengans/following{/other_user}",
"gists_url": "https://api.github.com/users/stevengans/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevengans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevengans/subscriptions",
"organizations_url": "https://api.github.com/users/stevengans/orgs",
"repos_url": "https://api.github.com/users/stevengans/repos",
"events_url": "https://api.github.com/users/stevengans/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevengans/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 14
| 2024-03-06T20:02:49
| 2024-04-12T22:07:41
| 2024-04-12T22:07:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Using:
```
curl http://localhost:11434/api/chat -d '{
  "model": "mistral",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```
Hardware:
GPU: Nvidia RTX A5000
CPU: Intel i5-12600K
Mem: 64 GB
OS: Windows 11 21H2
Performance is roughly 4x slower per call from what I've witnessed.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2960/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5198
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5198/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5198/comments
|
https://api.github.com/repos/ollama/ollama/issues/5198/events
|
https://github.com/ollama/ollama/issues/5198
| 2,365,904,300
|
I_kwDOJ0Z1Ps6NBNWs
| 5,198
|
use_mmap: Error: invalid int value [false]
|
{
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-06-21T07:25:46
| 2024-07-03T20:59:43
| 2024-07-03T20:59:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When adding the `use_mmap` parameter to a modelfile, it used to be written as `PARAMETER use_mmap false`. Since 0.1.45, this no longer works; it wants an integer value, so I put `PARAMETER use_mmap 0`. This allows me to create the modelfile.
However, when attempting to run the model, it comes back with this error: `Error: option "use_mmap" must be of type boolean`
It seems like a cast was forgotten somewhere.
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.45
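Until the Modelfile round-trip is fixed, one possible workaround (a sketch, assuming the runtime still accepts booleans through the API `options` field; the model name is hypothetical) is to pass `use_mmap` per request instead of baking it into the modelfile:

```shell
# Build a request payload with use_mmap as a real JSON boolean
# (not the string "false" or the integer 0 the Modelfile parser wants).
cat > /tmp/ollama_payload.json <<'EOF'
{
  "model": "llama3",
  "prompt": "hello",
  "options": { "use_mmap": false }
}
EOF
# Check the payload is valid JSON before sending it, e.g. with:
#   curl http://localhost:11434/api/generate -d @/tmp/ollama_payload.json
python3 -m json.tool /tmp/ollama_payload.json
```

This sidesteps the Modelfile parser entirely; whether the server honors the option this way on 0.1.45 is an assumption worth verifying.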
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5198/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8538
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8538/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8538/comments
|
https://api.github.com/repos/ollama/ollama/issues/8538/events
|
https://github.com/ollama/ollama/issues/8538
| 2,805,096,920
|
I_kwDOJ0Z1Ps6nMmHY
| 8,538
|
Add support for the AI HAT+
|
{
"login": "sealad886",
"id": 155285242,
"node_id": "U_kgDOCUF2-g",
"avatar_url": "https://avatars.githubusercontent.com/u/155285242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sealad886",
"html_url": "https://github.com/sealad886",
"followers_url": "https://api.github.com/users/sealad886/followers",
"following_url": "https://api.github.com/users/sealad886/following{/other_user}",
"gists_url": "https://api.github.com/users/sealad886/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sealad886/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sealad886/subscriptions",
"organizations_url": "https://api.github.com/users/sealad886/orgs",
"repos_url": "https://api.github.com/users/sealad886/repos",
"events_url": "https://api.github.com/users/sealad886/events{/privacy}",
"received_events_url": "https://api.github.com/users/sealad886/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-22T18:40:15
| 2025-01-22T18:40:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Add support for the new AI HAT+ add-on for the Raspberry Pi 5 ([info here](https://www.raspberrypi.com/products/ai-hat/)) to enable speedups.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8538/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2892
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2892/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2892/comments
|
https://api.github.com/repos/ollama/ollama/issues/2892/events
|
https://github.com/ollama/ollama/issues/2892
| 2,165,313,114
|
I_kwDOJ0Z1Ps6BEA5a
| 2,892
|
Introduce some smart way to identify and cleanup unfinished downloads to avoid space wastage
|
{
"login": "kha84",
"id": 110789576,
"node_id": "U_kgDOBpqDyA",
"avatar_url": "https://avatars.githubusercontent.com/u/110789576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kha84",
"html_url": "https://github.com/kha84",
"followers_url": "https://api.github.com/users/kha84/followers",
"following_url": "https://api.github.com/users/kha84/following{/other_user}",
"gists_url": "https://api.github.com/users/kha84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kha84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kha84/subscriptions",
"organizations_url": "https://api.github.com/users/kha84/orgs",
"repos_url": "https://api.github.com/users/kha84/repos",
"events_url": "https://api.github.com/users/kha84/events{/privacy}",
"received_events_url": "https://api.github.com/users/kha84/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-03T11:33:15
| 2024-03-04T08:10:31
| 2024-03-04T08:10:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
**Is your feature request related to a problem? Please describe.**
If you run `ollama run/pull <some_new_model_you_didnt_have>` and the download stops partway through for whatever reason (like network issues), or you interrupt it yourself (with Ctrl+C), the unfinished files are left (supposedly) in `~/.ollama/models/blobs/sha256:*-partial*`, occupying quite a lot of space depending on how big the model files were and when exactly the download was interrupted. Running `ollama list` doesn't show those unfinished downloads, so you don't see them, don't know they exist, and can't remove them to reclaim the space. This wastes disk space without the user knowing.
**Describe the solution you'd like**
Ollama should have a **reconcile** procedure to identify files belonging to unfinished downloads, warn the user about them, and optionally drop them. Ideally this reconcile code would run alongside every other `ollama` command the user spawns, printing an appropriate warning to stdout or stderr that there are unfinished downloads taking up space.
**Describe alternatives you've considered**
1. Manually deleting files related to unfinished downloads (might be tricky for users, because where ollama keeps its internals and how they are organized isn't part of the official documentation; it's kinda hidden under the hood)
2. `ollama list` showing unfinished downloads so they can be deleted with `ollama rm`
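In the meantime, alternative 1 can be sketched as a short script. The blob directory and the `-partial` suffix are assumptions based on observed behavior, not documented API, and may change between releases; `OLLAMA_BLOBS` is a hypothetical override added here for testing.

```shell
# List leftover partial-download blobs and their sizes.
# Default location is the one observed under ~/.ollama (an assumption).
BLOBS="${OLLAMA_BLOBS:-$HOME/.ollama/models/blobs}"
if [ -d "$BLOBS" ]; then
  # -maxdepth 1: blobs live directly in this directory
  find "$BLOBS" -maxdepth 1 -name '*partial*' -exec du -h {} + 2>/dev/null
else
  echo "no blob directory at $BLOBS"
fi
```

Review the list before deleting anything, since a `-partial` file may belong to a download that is still resumable.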
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2892/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6379
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6379/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6379/comments
|
https://api.github.com/repos/ollama/ollama/issues/6379/events
|
https://github.com/ollama/ollama/pull/6379
| 2,469,010,088
|
PR_kwDOJ0Z1Ps54gzdR
| 6,379
|
Separate ARM64 CPU builds from x64 CPU builds and use Clang instead
|
{
"login": "hmartinez82",
"id": 1100440,
"node_id": "MDQ6VXNlcjExMDA0NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1100440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hmartinez82",
"html_url": "https://github.com/hmartinez82",
"followers_url": "https://api.github.com/users/hmartinez82/followers",
"following_url": "https://api.github.com/users/hmartinez82/following{/other_user}",
"gists_url": "https://api.github.com/users/hmartinez82/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hmartinez82/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hmartinez82/subscriptions",
"organizations_url": "https://api.github.com/users/hmartinez82/orgs",
"repos_url": "https://api.github.com/users/hmartinez82/repos",
"events_url": "https://api.github.com/users/hmartinez82/events{/privacy}",
"received_events_url": "https://api.github.com/users/hmartinez82/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-08-15T21:35:53
| 2025-01-04T02:42:57
| 2024-08-24T08:07:17
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6379",
"html_url": "https://github.com/ollama/ollama/pull/6379",
"diff_url": "https://github.com/ollama/ollama/pull/6379.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6379.patch",
"merged_at": null
}
|
This enhances https://github.com/ollama/ollama/pull/3972 by making builds that are much, much faster:
- Separate the ARM64 CPU builds from x64.
- Make the lowest common denominator CPU for ARM64 ARMv8.2-A and use the matrix multiplication kernels tailored for NEON and ARMv8.2-A (see https://justine.lol/matmul/).
- Create a separate runner built for ARMv8.7-A (e.g. Snapdragon Plus/Elite X) that uses the MATMUL instructions.
The build instructions for ARM64 stay the same as in https://github.com/ollama/ollama/pull/5268. Clang was already required to build the llama.cpp static library anyway, so switching the runners (on ARM64) to be built with Clang doesn't add new requirements.
|
{
"login": "hmartinez82",
"id": 1100440,
"node_id": "MDQ6VXNlcjExMDA0NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1100440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hmartinez82",
"html_url": "https://github.com/hmartinez82",
"followers_url": "https://api.github.com/users/hmartinez82/followers",
"following_url": "https://api.github.com/users/hmartinez82/following{/other_user}",
"gists_url": "https://api.github.com/users/hmartinez82/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hmartinez82/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hmartinez82/subscriptions",
"organizations_url": "https://api.github.com/users/hmartinez82/orgs",
"repos_url": "https://api.github.com/users/hmartinez82/repos",
"events_url": "https://api.github.com/users/hmartinez82/events{/privacy}",
"received_events_url": "https://api.github.com/users/hmartinez82/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6379/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6379/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6398
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6398/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6398/comments
|
https://api.github.com/repos/ollama/ollama/issues/6398/events
|
https://github.com/ollama/ollama/issues/6398
| 2,471,533,429
|
I_kwDOJ0Z1Ps6TUJt1
| 6,398
|
When running ollama via docker, it won't respond to any request by API-call or python-client-library
|
{
"login": "itinance",
"id": 1758597,
"node_id": "MDQ6VXNlcjE3NTg1OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1758597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itinance",
"html_url": "https://github.com/itinance",
"followers_url": "https://api.github.com/users/itinance/followers",
"following_url": "https://api.github.com/users/itinance/following{/other_user}",
"gists_url": "https://api.github.com/users/itinance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itinance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itinance/subscriptions",
"organizations_url": "https://api.github.com/users/itinance/orgs",
"repos_url": "https://api.github.com/users/itinance/repos",
"events_url": "https://api.github.com/users/itinance/events{/privacy}",
"received_events_url": "https://api.github.com/users/itinance/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 25
| 2024-08-17T13:51:05
| 2024-08-30T11:08:00
| 2024-08-17T22:22:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I set up the NVIDIA Docker toolkit successfully on my Ubuntu 22 machine with an RTX 4000, and start ollama as a Docker container with port 11434 exposed:
`docker run -d --gpus=all --env OLLAMA_NUM_PARALLEL=1 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama`
After that, "docker ps" shows:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bb799064d233   ollama/ollama   "/bin/ollama serve"   38 minutes ago   Up 38 minutes   0.0.0.0:11434->11434/tcp   ollama
```
Starting a conversation in the CLI works perfectly:
`docker exec -it ollama ollama run llama3`
```
>>> hello
Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?
```
But when I make a curl request (or use the ollama Python library), it hangs forever:
```
curl http://localhost:11434/api/generate -d '{
"model": "llama3.1",
"prompt":"Why is the sky blue?"
}'
```
Open Ports:
```
root@Ubuntu-2204-jammy-amd64-base ~ # sudo netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:11434 0.0.0.0:* LISTEN 320493/docker-proxy
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 300450/nginx: maste
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 299159/sshd: /usr/s
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 299170/systemd-reso
tcp6 0 0 :::80 :::* LISTEN 300450/nginx: maste
tcp6 0 0 :::22 :::* LISTEN 299159/sshd: /usr/s
```
This works again when I start the ollama service directly on the machine, installed via
`curl -fsSL https://ollama.com/install.sh | sh`
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.6
|
{
"login": "itinance",
"id": 1758597,
"node_id": "MDQ6VXNlcjE3NTg1OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1758597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itinance",
"html_url": "https://github.com/itinance",
"followers_url": "https://api.github.com/users/itinance/followers",
"following_url": "https://api.github.com/users/itinance/following{/other_user}",
"gists_url": "https://api.github.com/users/itinance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itinance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itinance/subscriptions",
"organizations_url": "https://api.github.com/users/itinance/orgs",
"repos_url": "https://api.github.com/users/itinance/repos",
"events_url": "https://api.github.com/users/itinance/events{/privacy}",
"received_events_url": "https://api.github.com/users/itinance/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6398/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2566
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2566/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2566/comments
|
https://api.github.com/repos/ollama/ollama/issues/2566/events
|
https://github.com/ollama/ollama/issues/2566
| 2,140,516,414
|
I_kwDOJ0Z1Ps5_lbA-
| 2,566
|
Not enough vram available, falling back to CPU only, AMD 16 GB VRAM
|
{
"login": "user82622",
"id": 88026138,
"node_id": "MDQ6VXNlcjg4MDI2MTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/88026138?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/user82622",
"html_url": "https://github.com/user82622",
"followers_url": "https://api.github.com/users/user82622/followers",
"following_url": "https://api.github.com/users/user82622/following{/other_user}",
"gists_url": "https://api.github.com/users/user82622/gists{/gist_id}",
"starred_url": "https://api.github.com/users/user82622/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/user82622/subscriptions",
"organizations_url": "https://api.github.com/users/user82622/orgs",
"repos_url": "https://api.github.com/users/user82622/repos",
"events_url": "https://api.github.com/users/user82622/events{/privacy}",
"received_events_url": "https://api.github.com/users/user82622/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-02-17T19:43:17
| 2024-05-19T16:09:51
| 2024-03-21T13:31:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I use an iGPU with ROCm and it worked great until yesterday, when I recompiled my Docker image with the newest Ollama version. Since then I get "not enough vram available, falling back to CPU only", even though the GPU seems to be detected.
```
time=xxx level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/opt/rocm/lib/librocm_smi64.so.6.0.60000 /opt/rocm-6.0.0/lib/librocm_smi64.so.6.0.60000]"
time=xxx level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=xxx level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[GIN] xxx | 200 | 4.592477ms | 192.168.33.14 | GET "/api/tags"
time=xxx level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=xxx level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=xxx level=INFO source=llm.go:111 msg="not enough vram available, falling back to CPU only"
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2566/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2566/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8406
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8406/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8406/comments
|
https://api.github.com/repos/ollama/ollama/issues/8406/events
|
https://github.com/ollama/ollama/issues/8406
| 2,785,519,640
|
I_kwDOJ0Z1Ps6mB6gY
| 8,406
|
Pulling models resets the download on raspberry pi 5
|
{
"login": "nexon33",
"id": 47557266,
"node_id": "MDQ6VXNlcjQ3NTU3MjY2",
"avatar_url": "https://avatars.githubusercontent.com/u/47557266?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nexon33",
"html_url": "https://github.com/nexon33",
"followers_url": "https://api.github.com/users/nexon33/followers",
"following_url": "https://api.github.com/users/nexon33/following{/other_user}",
"gists_url": "https://api.github.com/users/nexon33/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nexon33/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nexon33/subscriptions",
"organizations_url": "https://api.github.com/users/nexon33/orgs",
"repos_url": "https://api.github.com/users/nexon33/repos",
"events_url": "https://api.github.com/users/nexon33/events{/privacy}",
"received_events_url": "https://api.github.com/users/nexon33/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 21
| 2025-01-13T21:50:44
| 2025-01-25T23:59:12
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The Raspberry Pi 5 is connected over Wi-Fi, but this shouldn't be the problem, since my other device pulls the models correctly on the same Wi-Fi.
Suggested fix: I currently have to repeatedly cancel the downloads with CTRL + C, which saves the download state and then continues where it left off, functioning as a kind of save/checkpoint. Maybe it's possible to back up the download state once in a while so that it doesn't lose progress on a crash?
Feel free to request more info or let me know if you want me to test!
Edit: sometimes the error "Error: context canceled" shows up and I think it might be related
### OS
Linux
### GPU
Other
### CPU
Other
### Ollama version
0.5.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8406/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8406/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3048
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3048/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3048/comments
|
https://api.github.com/repos/ollama/ollama/issues/3048/events
|
https://github.com/ollama/ollama/pull/3048
| 2,177,918,776
|
PR_kwDOJ0Z1Ps5pLcWc
| 3,048
|
Harden for deps file being empty (or short)
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-10T21:46:06
| 2024-03-10T22:22:10
| 2024-03-10T22:17:23
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3048",
"html_url": "https://github.com/ollama/ollama/pull/3048",
"diff_url": "https://github.com/ollama/ollama/pull/3048.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3048.patch",
"merged_at": "2024-03-10T22:17:23"
}
|
Breadcrumb: Rosetta had a bug in 14.3 that causes our build to fail to generate the deps file properly, since ldd fails with the error `cannot enable executable stack as shared object requires: Invalid argument` when trying to process the hip library dependency chain.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3048/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6951
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6951/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6951/comments
|
https://api.github.com/repos/ollama/ollama/issues/6951/events
|
https://github.com/ollama/ollama/issues/6951
| 2,547,346,208
|
I_kwDOJ0Z1Ps6X1Wsg
| 6,951
|
Can it support Windows Server 2012
|
{
"login": "lilinglin789",
"id": 171638450,
"node_id": "U_kgDOCjr-sg",
"avatar_url": "https://avatars.githubusercontent.com/u/171638450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lilinglin789",
"html_url": "https://github.com/lilinglin789",
"followers_url": "https://api.github.com/users/lilinglin789/followers",
"following_url": "https://api.github.com/users/lilinglin789/following{/other_user}",
"gists_url": "https://api.github.com/users/lilinglin789/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lilinglin789/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lilinglin789/subscriptions",
"organizations_url": "https://api.github.com/users/lilinglin789/orgs",
"repos_url": "https://api.github.com/users/lilinglin789/repos",
"events_url": "https://api.github.com/users/lilinglin789/events{/privacy}",
"received_events_url": "https://api.github.com/users/lilinglin789/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-25T08:36:47
| 2024-09-25T15:30:18
| 2024-09-25T15:30:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can it support Windows Server 2012
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6951/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7147
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7147/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7147/comments
|
https://api.github.com/repos/ollama/ollama/issues/7147/events
|
https://github.com/ollama/ollama/issues/7147
| 2,575,575,103
|
I_kwDOJ0Z1Ps6ZhCg_
| 7,147
|
Support for GrabbeAI
|
{
"login": "finnbusse",
"id": 110921874,
"node_id": "U_kgDOBpyIkg",
"avatar_url": "https://avatars.githubusercontent.com/u/110921874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/finnbusse",
"html_url": "https://github.com/finnbusse",
"followers_url": "https://api.github.com/users/finnbusse/followers",
"following_url": "https://api.github.com/users/finnbusse/following{/other_user}",
"gists_url": "https://api.github.com/users/finnbusse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/finnbusse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finnbusse/subscriptions",
"organizations_url": "https://api.github.com/users/finnbusse/orgs",
"repos_url": "https://api.github.com/users/finnbusse/repos",
"events_url": "https://api.github.com/users/finnbusse/events{/privacy}",
"received_events_url": "https://api.github.com/users/finnbusse/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 8
| 2024-10-09T11:00:32
| 2024-10-09T12:20:14
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
GrabbeAI is an AI model trained by 10th-grade students of the German grammar school Grabbe-Gymnasium Detmold and helps users - especially students and teachers - with their (home)work! To my knowledge, it is the first LLM made entirely by students!
Since I personally trained the model with some other students, with (financial) assistance from some teachers, you can find the model on both Hugging Face and the Ollama Model Hub.
The Hugging Face URL is: https://huggingface.co/grabbe-gymnasium-detmold/grabbe-ai
The Ollama hub URL is: https://ollama.com/grabbe-gymnasium/grabbe-ai
On Hugging Face the model has about 9 million downloads, and on the Ollama hub it has more than 300, so I think it could be relevant to a large group of people!
We trained the model on real class test tasks and good homework, and it works quite well. We are actively updating our LLM on Hugging Face and plan to publish a new build by December. This new release will double the training dataset!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7147/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4450
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4450/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4450/comments
|
https://api.github.com/repos/ollama/ollama/issues/4450/events
|
https://github.com/ollama/ollama/issues/4450
| 2,297,645,509
|
I_kwDOJ0Z1Ps6I80nF
| 4,450
|
Resume a conversation started in Open Web-UI using ollama command line
|
{
"login": "tomav",
"id": 303803,
"node_id": "MDQ6VXNlcjMwMzgwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/303803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomav",
"html_url": "https://github.com/tomav",
"followers_url": "https://api.github.com/users/tomav/followers",
"following_url": "https://api.github.com/users/tomav/following{/other_user}",
"gists_url": "https://api.github.com/users/tomav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomav/subscriptions",
"organizations_url": "https://api.github.com/users/tomav/orgs",
"repos_url": "https://api.github.com/users/tomav/repos",
"events_url": "https://api.github.com/users/tomav/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomav/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-05-15T11:29:30
| 2024-05-26T11:15:09
| 2024-05-15T17:22:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi there!
I can't find information on this, let me know if there's actually a way to do this.
Thanks!
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4450/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/5476
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5476/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5476/comments
|
https://api.github.com/repos/ollama/ollama/issues/5476/events
|
https://github.com/ollama/ollama/issues/5476
| 2,389,917,662
|
I_kwDOJ0Z1Ps6Ocz_e
| 5,476
|
Scheduler attempts to load model split over cuda + rocm GPUs
|
{
"login": "sksonic",
"id": 1157854,
"node_id": "MDQ6VXNlcjExNTc4NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1157854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sksonic",
"html_url": "https://github.com/sksonic",
"followers_url": "https://api.github.com/users/sksonic/followers",
"following_url": "https://api.github.com/users/sksonic/following{/other_user}",
"gists_url": "https://api.github.com/users/sksonic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sksonic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sksonic/subscriptions",
"organizations_url": "https://api.github.com/users/sksonic/orgs",
"repos_url": "https://api.github.com/users/sksonic/repos",
"events_url": "https://api.github.com/users/sksonic/events{/privacy}",
"received_events_url": "https://api.github.com/users/sksonic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-07-04T04:34:49
| 2024-07-30T18:06:43
| 2024-07-30T18:06:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have two mixed GPUs: nvidia (P40) + amd (RX 7900 XTX).
I am able to load smaller models - these go to the P40 first. When loading a model larger than can fit on the P40, it seems the malloc operation tries to allocate the full model size on the first GPU, despite the "offload to cuda" logs pointing to a spread across the two GPUs.
Error is: allocating 39979.48 MiB on device 0: cudaMalloc failed: out of memory
```
ollama[8945]: time=2024-07-03T23:34:38.947+04:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=41,40 memory.available="[23.7 GiB 23.5 GiB]" memory.required.full="44.4 GiB" memory.required.partial="44.4 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[22.6 GiB 21.8 GiB]" memory.weights.total="39.5 GiB" memory.weights.repeating="38.7 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
ollama[8945]: time=2024-07-03T23:34:38.947+04:00 level=INFO source=server.go:368 msg="starting llama server" cmd="/tmp/ollama3425135909/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-6baa2a027ec7595d421d151fec74dd338a15acebb83e52510a67e08fa4dd7b71 --ctx-size 4000 --batch-size 512 --embedding --log-disable --n-gpu-layers 81 --parallel 1 --tensor-split 41,40 --tensor-split 41,40 --port 43421
...
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 39979.48 MiB on device 0: cudaMalloc failed: out of memory
```
### OS
Linux
### GPU
Nvidia, AMD
### CPU
AMD
### Ollama version
0.1.48 (also had issue on previous version)
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5476/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6840
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6840/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6840/comments
|
https://api.github.com/repos/ollama/ollama/issues/6840/events
|
https://github.com/ollama/ollama/issues/6840
| 2,531,427,671
|
I_kwDOJ0Z1Ps6W4oVX
| 6,840
|
no nvidia devices detected
|
{
"login": "deardeer7",
"id": 42965045,
"node_id": "MDQ6VXNlcjQyOTY1MDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/42965045?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deardeer7",
"html_url": "https://github.com/deardeer7",
"followers_url": "https://api.github.com/users/deardeer7/followers",
"following_url": "https://api.github.com/users/deardeer7/following{/other_user}",
"gists_url": "https://api.github.com/users/deardeer7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deardeer7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deardeer7/subscriptions",
"organizations_url": "https://api.github.com/users/deardeer7/orgs",
"repos_url": "https://api.github.com/users/deardeer7/repos",
"events_url": "https://api.github.com/users/deardeer7/events{/privacy}",
"received_events_url": "https://api.github.com/users/deardeer7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-09-17T15:11:34
| 2024-09-18T02:12:55
| 2024-09-18T02:12:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I deployed Ollama with Docker on a Debian 12 machine, but it can't use the GPU; it can only use the CPU for inference.
The ollama docker log:
```log
LAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-09-17T15:04:01.101Z level=INFO source=images.go:753 msg="total blobs: 11"
time=2024-09-17T15:04:01.101Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-17T15:04:01.101Z level=INFO source=routes.go:1172 msg="Listening on [::]:11434 (version 0.3.10)"
time=2024-09-17T15:04:01.102Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama303164785/runners
time=2024-09-17T15:04:08.212Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12 rocm_v60102]"
time=2024-09-17T15:04:08.212Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2024-09-17T15:04:08.215Z level=INFO source=gpu.go:560 msg="no nvidia devices detected" library=/usr/lib/x86_64-linux-gnu/libcuda.so.550.107.02
time=2024-09-17T15:04:08.220Z level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
time=2024-09-17T15:04:08.220Z level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="31.3 GiB" available="29.4 GiB"
```
My GPU is a GTX 1060, the driver is installed; nvidia-smi output:
```
Tue Sep 17 23:09:22 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.107.02 Driver Version: 550.107.02 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce GTX 1060 6GB Off | 00000000:2B:00.0 Off | N/A |
| 57% 43C P0 28W / 120W | 0MiB / 6144MiB | 2% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
```
docker compose:
```
version: '3.8'
services:
ollama:
image: ollama/ollama:latest
container_name: ollama
ports:
- "11434:11434"
environment:
- OLLAMA_ORIGINS=*
volumes:
- /vol1/1000/os/docker/ollama:/root/.ollama
deploy:
resources:
reservations:
devices:
- capabilities: [gpu]
restart: always
```
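In the log above, `libcuda.so` is found but no devices are detected, which usually points at the container not being granted the GPU rather than a driver problem. A minimal Compose sketch with an explicit device reservation (this assumes the NVIDIA Container Toolkit is installed on the Debian host; `driver: nvidia` and `count: all` are additions not present in the file above):

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    deploy:
      resources:
        reservations:
          devices:
            # Explicitly request all NVIDIA GPUs; requires the
            # NVIDIA Container Toolkit to be installed on the host.
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

If the toolkit isn't installed (`nvidia-ctk --version` fails), installing and configuring it is likely the actual fix; the Compose change alone won't help.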
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
latest
|
{
"login": "deardeer7",
"id": 42965045,
"node_id": "MDQ6VXNlcjQyOTY1MDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/42965045?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deardeer7",
"html_url": "https://github.com/deardeer7",
"followers_url": "https://api.github.com/users/deardeer7/followers",
"following_url": "https://api.github.com/users/deardeer7/following{/other_user}",
"gists_url": "https://api.github.com/users/deardeer7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deardeer7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deardeer7/subscriptions",
"organizations_url": "https://api.github.com/users/deardeer7/orgs",
"repos_url": "https://api.github.com/users/deardeer7/repos",
"events_url": "https://api.github.com/users/deardeer7/events{/privacy}",
"received_events_url": "https://api.github.com/users/deardeer7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6840/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/747
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/747/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/747/comments
|
https://api.github.com/repos/ollama/ollama/issues/747/events
|
https://github.com/ollama/ollama/pull/747
| 1,934,240,450
|
PR_kwDOJ0Z1Ps5cUsNx
| 747
|
Don't assume download has started if cancelled during preparation
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-10-10T03:18:35
| 2023-10-10T17:12:30
| 2023-10-10T17:12:29
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/747",
"html_url": "https://github.com/ollama/ollama/pull/747",
"diff_url": "https://github.com/ollama/ollama/pull/747.diff",
"patch_url": "https://github.com/ollama/ollama/pull/747.patch",
"merged_at": "2023-10-10T17:12:29"
}
|
Discards download progress if cancelled (e.g. by `ctrl+c`) while preparing to download
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/747/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8221
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8221/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8221/comments
|
https://api.github.com/repos/ollama/ollama/issues/8221/events
|
https://github.com/ollama/ollama/issues/8221
| 2,756,603,179
|
I_kwDOJ0Z1Ps6kTm0r
| 8,221
|
Model pull fail due to captive portal
|
{
"login": "startreered",
"id": 192638450,
"node_id": "U_kgDOC3tt8g",
"avatar_url": "https://avatars.githubusercontent.com/u/192638450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/startreered",
"html_url": "https://github.com/startreered",
"followers_url": "https://api.github.com/users/startreered/followers",
"following_url": "https://api.github.com/users/startreered/following{/other_user}",
"gists_url": "https://api.github.com/users/startreered/gists{/gist_id}",
"starred_url": "https://api.github.com/users/startreered/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/startreered/subscriptions",
"organizations_url": "https://api.github.com/users/startreered/orgs",
"repos_url": "https://api.github.com/users/startreered/repos",
"events_url": "https://api.github.com/users/startreered/events{/privacy}",
"received_events_url": "https://api.github.com/users/startreered/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-12-23T19:22:10
| 2024-12-23T20:00:50
| 2024-12-23T19:59:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm on a network with captive portals that offer 'coaching' pages for certain content. I believe Ollama is being served a coaching page when pulling a model, but I can't determine which URL models are pulled from. If I knew the URL, I could navigate there in a browser and accept the captive portal for the day. Additionally, if I knew the necessary URLs, I could ask for them to be whitelisted.
```
>ollama run nemotron
pulling manifest
pulling c147388e9931... 0% ▕ ▏ 0 B/ 42 GB
Error: max retries exceeded: EOF
```
When I pull a model from Hugging Face, I can manually navigate to the model page first, accepting the captive portal, and the model pulls successfully.
I asked on the discord and searched the ollama github but haven't found an answer yet.
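If it helps, `ollama pull` resolves model tags against `registry.ollama.ai` over HTTPS. A minimal sketch of the manifest URL a pull starts from (the registry host and the OCI-style `/v2/library/...` path here are assumptions worth verifying against the server logs):

```python
# Sketch: the manifest URL "ollama pull <model>" is assumed to start from.
# Assumptions: registry host is registry.ollama.ai and official models
# live under the "library" namespace.
def manifest_url(model: str, tag: str = "latest") -> str:
    return f"https://registry.ollama.ai/v2/library/{model}/manifests/{tag}"

print(manifest_url("nemotron"))
# → https://registry.ollama.ai/v2/library/nemotron/manifests/latest
```

Whitelisting `registry.ollama.ai` (plus any hosts it redirects blob downloads to) should let the pull complete.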
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "startreered",
"id": 192638450,
"node_id": "U_kgDOC3tt8g",
"avatar_url": "https://avatars.githubusercontent.com/u/192638450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/startreered",
"html_url": "https://github.com/startreered",
"followers_url": "https://api.github.com/users/startreered/followers",
"following_url": "https://api.github.com/users/startreered/following{/other_user}",
"gists_url": "https://api.github.com/users/startreered/gists{/gist_id}",
"starred_url": "https://api.github.com/users/startreered/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/startreered/subscriptions",
"organizations_url": "https://api.github.com/users/startreered/orgs",
"repos_url": "https://api.github.com/users/startreered/repos",
"events_url": "https://api.github.com/users/startreered/events{/privacy}",
"received_events_url": "https://api.github.com/users/startreered/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8221/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3470
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3470/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3470/comments
|
https://api.github.com/repos/ollama/ollama/issues/3470/events
|
https://github.com/ollama/ollama/pull/3470
| 2,221,830,314
|
PR_kwDOJ0Z1Ps5rghLe
| 3,470
|
cmd: provide feedback if OLLAMA_MODELS is set on `run` command
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-03T03:50:34
| 2024-04-03T05:11:14
| 2024-04-03T05:11:14
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3470",
"html_url": "https://github.com/ollama/ollama/pull/3470",
"diff_url": "https://github.com/ollama/ollama/pull/3470.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3470.patch",
"merged_at": "2024-04-03T05:11:13"
}
|
This also moves the checkServerHeartbeat call out of the "RunE" Cobra machinery (that's the only word I have for that) to the call site, where it runs after the check for OLLAMA_MODELS. This allows the helpful error message to be printed before the server heartbeat check, and arguably makes the code more readable without the magic/superfluous "pre" function caller.
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3470/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2670
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2670/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2670/comments
|
https://api.github.com/repos/ollama/ollama/issues/2670/events
|
https://github.com/ollama/ollama/issues/2670
| 2,148,451,340
|
I_kwDOJ0Z1Ps6ADsQM
| 2,670
|
Build Cuda ready Docker image
|
{
"login": "thiner",
"id": 1897227,
"node_id": "MDQ6VXNlcjE4OTcyMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1897227?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thiner",
"html_url": "https://github.com/thiner",
"followers_url": "https://api.github.com/users/thiner/followers",
"following_url": "https://api.github.com/users/thiner/following{/other_user}",
"gists_url": "https://api.github.com/users/thiner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thiner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thiner/subscriptions",
"organizations_url": "https://api.github.com/users/thiner/orgs",
"repos_url": "https://api.github.com/users/thiner/repos",
"events_url": "https://api.github.com/users/thiner/events{/privacy}",
"received_events_url": "https://api.github.com/users/thiner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-02-22T07:58:58
| 2024-05-04T16:01:25
| 2024-03-11T22:40:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently, the official Ollama container image doesn't contain the necessary CUDA libraries. This is really inconvenient when running it on a server. I see you have provided [rocm] images for AMD GPUs; can you also provide CUDA-ready images? If that's not feasible, how about providing the specific Dockerfile?
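For reference, a sketch of how the default image is typically run with GPU access (this assumes the NVIDIA Container Toolkit is installed on the host; the flags below follow the project's published Docker instructions, but treat them as assumptions to verify):

```shell
# Assumes the NVIDIA Container Toolkit is installed and configured.
# GPU access is granted at run time with --gpus=all rather than
# baked into a separate CUDA-specific image tag.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama
```

If the container still reports CPU-only inference, checking `docker info` for the `nvidia` runtime is a reasonable next step.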
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2670/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7011
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7011/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7011/comments
|
https://api.github.com/repos/ollama/ollama/issues/7011/events
|
https://github.com/ollama/ollama/issues/7011
| 2,553,852,526
|
I_kwDOJ0Z1Ps6YOLJu
| 7,011
|
ollama run llama3.2 --- Error: exception done_getting_tensors: wrong number of tensors; expected 255, got 254
|
{
"login": "andytriboletti",
"id": 78852,
"node_id": "MDQ6VXNlcjc4ODUy",
"avatar_url": "https://avatars.githubusercontent.com/u/78852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andytriboletti",
"html_url": "https://github.com/andytriboletti",
"followers_url": "https://api.github.com/users/andytriboletti/followers",
"following_url": "https://api.github.com/users/andytriboletti/following{/other_user}",
"gists_url": "https://api.github.com/users/andytriboletti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andytriboletti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andytriboletti/subscriptions",
"organizations_url": "https://api.github.com/users/andytriboletti/orgs",
"repos_url": "https://api.github.com/users/andytriboletti/repos",
"events_url": "https://api.github.com/users/andytriboletti/events{/privacy}",
"received_events_url": "https://api.github.com/users/andytriboletti/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-09-27T22:55:13
| 2024-10-03T19:51:45
| 2024-10-03T19:51:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I ran ollama run llama3.2
I got this error:
Error: exception done_getting_tensors: wrong number of tensors; expected 255, got 254
Ollama version reports:
ollama -v
ollama version is 0.1.30
Warning: client version is 0.2.8
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.30
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7011/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1682
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1682/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1682/comments
|
https://api.github.com/repos/ollama/ollama/issues/1682/events
|
https://github.com/ollama/ollama/issues/1682
| 2,054,587,188
|
I_kwDOJ0Z1Ps56doM0
| 1,682
|
Importing (PyTorch & Safetensors)
|
{
"login": "ForkedInTime",
"id": 22755327,
"node_id": "MDQ6VXNlcjIyNzU1MzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/22755327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ForkedInTime",
"html_url": "https://github.com/ForkedInTime",
"followers_url": "https://api.github.com/users/ForkedInTime/followers",
"following_url": "https://api.github.com/users/ForkedInTime/following{/other_user}",
"gists_url": "https://api.github.com/users/ForkedInTime/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ForkedInTime/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ForkedInTime/subscriptions",
"organizations_url": "https://api.github.com/users/ForkedInTime/orgs",
"repos_url": "https://api.github.com/users/ForkedInTime/repos",
"events_url": "https://api.github.com/users/ForkedInTime/events{/privacy}",
"received_events_url": "https://api.github.com/users/ForkedInTime/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2023-12-23T00:04:01
| 2024-03-12T21:37:05
| 2024-03-12T21:37:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Step 1 ok from section: "Importing (PyTorch & Safetensors)"
Step 2 fails with docker command:
"yetipaw@dolphin ~ cd Apps/dolphin-2.5-mixtral-8x7b
yetipaw@dolphin ~/Apps/dolphin-2.5-mixtral-8x7b main docker run --rm -v .:/model ollama/quantize -q q4_0 /model
**unknown architecture MixtralForCausalLM**
yetipaw@dolphin ~/Apps/dolphin-2.5-mixtral-8x7b main
"
Architecture is defined ok in config.json:
"yetipaw@dolphin ~/Apps/dolphin-2.5-mixtral-8x7b main ls
added_tokens.json pytorch_model-00004-of-00019.bin pytorch_model-00012-of-00019.bin pytorch_model.bin.index.json
config.json pytorch_model-00005-of-00019.bin pytorch_model-00013-of-00019.bin README.md
configs pytorch_model-00006-of-00019.bin pytorch_model-00014-of-00019.bin special_tokens_map.json
generation_config.json pytorch_model-00007-of-00019.bin pytorch_model-00015-of-00019.bin tokenizer_config.json
Modelfile pytorch_model-00008-of-00019.bin pytorch_model-00016-of-00019.bin tokenizer.model
pytorch_model-00001-of-00019.bin pytorch_model-00009-of-00019.bin pytorch_model-00017-of-00019.bin
pytorch_model-00002-of-00019.bin pytorch_model-00010-of-00019.bin pytorch_model-00018-of-00019.bin
pytorch_model-00003-of-00019.bin pytorch_model-00011-of-00019.bin pytorch_model-00019-of-00019.bin
"
yetipaw@dolphin ~/Apps/dolphin-2.5-mixtral-8x7b main cat config.json
{
"_name_or_path": "/workspace/models/Mixtral-8x7B-v0.1",
"architectures": [
**"MixtralForCausalLM"**
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 32000,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mixtral",
"num_attention_heads": 32,
"num_experts_per_tok": 2,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"num_local_experts": 8,
"output_router_logits": false,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000.0,
"router_aux_loss_coef": 0.02,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.36.0.dev0",
"use_cache": false,
"vocab_size": 32002
}
Please advise.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1682/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/1682/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/337
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/337/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/337/comments
|
https://api.github.com/repos/ollama/ollama/issues/337/events
|
https://github.com/ollama/ollama/issues/337
| 1,848,242,712
|
I_kwDOJ0Z1Ps5uKfIY
| 337
|
Re-use already loaded model if only the prompt changes
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg",
"url": "https://api.github.com/repos/ollama/ollama/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2023-08-12T21:12:14
| 2023-10-19T14:40:00
| 2023-10-19T14:40:00
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If only the prompt (or other model-independent data) changes, then the model should stay loaded vs being reloaded
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/337/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/337/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8120
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8120/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8120/comments
|
https://api.github.com/repos/ollama/ollama/issues/8120/events
|
https://github.com/ollama/ollama/pull/8120
| 2,742,943,707
|
PR_kwDOJ0Z1Ps6FYwas
| 8,120
|
Add an ollama example that enables users to chat with a code generation model and then tests the code generated by the model #8090
|
{
"login": "jagane",
"id": 749416,
"node_id": "MDQ6VXNlcjc0OTQxNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/749416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jagane",
"html_url": "https://github.com/jagane",
"followers_url": "https://api.github.com/users/jagane/followers",
"following_url": "https://api.github.com/users/jagane/following{/other_user}",
"gists_url": "https://api.github.com/users/jagane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jagane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jagane/subscriptions",
"organizations_url": "https://api.github.com/users/jagane/orgs",
"repos_url": "https://api.github.com/users/jagane/repos",
"events_url": "https://api.github.com/users/jagane/events{/privacy}",
"received_events_url": "https://api.github.com/users/jagane/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-12-16T17:11:44
| 2025-01-14T22:05:10
| 2025-01-14T22:05:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8120",
"html_url": "https://github.com/ollama/ollama/pull/8120",
"diff_url": "https://github.com/ollama/ollama/pull/8120.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8120.patch",
"merged_at": null
}
|
Oftentimes, code generated by code generation models such as qwen2.5-coder:7b is not perfect. It may not even be syntactically correct. I am proposing to add an example program, possibly derived from the python-simplechat example, that extracts the code generated by the model and runs/tests it. The first iteration would only know how to run python programs.
python-code-iterate seems like a reasonable name for this example program.
1. User enters a prompt, e.g. "write a python program to quantize a model stored in my local dir /home/userabc/model1"
2. The model generates code for this task and provides it as part of its response, i.e. the 'assistant' message
3. Our new program python-code-iterate then asks the user whether to run the program and check its output
4. If the user says yes, python-code-iterate uses a subprocess to run the program and checks the return code, stdout, and stderr
5. If the return code is non-zero, the stderr contents are added as a user message and the chat continues
6. Repeat steps 2 to 5 until satisfactory results are obtained
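The run-and-iterate loop described above can be sketched roughly as follows (the function names and the shape of the `chat` callable are illustrative assumptions, not part of any existing example):

```python
import subprocess
import sys
import tempfile


def run_python_code(code: str, timeout: int = 30):
    """Write the model-generated code to a temp file, run it in a
    subprocess, and return (returncode, stdout, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(
        [sys.executable, path],
        capture_output=True, text=True, timeout=timeout,
    )
    return proc.returncode, proc.stdout, proc.stderr


def iterate_with_model(chat, messages, max_rounds=5):
    """Feed stderr back to the model as a user message until the
    generated program exits cleanly. `chat` is any callable mapping a
    message list to the assistant's code string (e.g. a hypothetical
    wrapper around an ollama chat call)."""
    for _ in range(max_rounds):
        code = chat(messages)
        rc, out, err = run_python_code(code)
        if rc == 0:
            return code, out
        # Non-zero exit: continue the chat with the error output.
        messages.append({"role": "user",
                         "content": f"The program failed with:\n{err}"})
    raise RuntimeError("no working program after max_rounds")
```

In a real example the user-confirmation step would sit between `chat` and `run_python_code`; it is omitted here to keep the sketch short.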
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8120/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8610
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8610/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8610/comments
|
https://api.github.com/repos/ollama/ollama/issues/8610/events
|
https://github.com/ollama/ollama/issues/8610
| 2,813,512,486
|
I_kwDOJ0Z1Ps6nsssm
| 8,610
|
Add the ability to import from gguf directly without a Modelfile
|
{
"login": "LeC-D",
"id": 17554693,
"node_id": "MDQ6VXNlcjE3NTU0Njkz",
"avatar_url": "https://avatars.githubusercontent.com/u/17554693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeC-D",
"html_url": "https://github.com/LeC-D",
"followers_url": "https://api.github.com/users/LeC-D/followers",
"following_url": "https://api.github.com/users/LeC-D/following{/other_user}",
"gists_url": "https://api.github.com/users/LeC-D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeC-D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeC-D/subscriptions",
"organizations_url": "https://api.github.com/users/LeC-D/orgs",
"repos_url": "https://api.github.com/users/LeC-D/repos",
"events_url": "https://api.github.com/users/LeC-D/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeC-D/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 0
| 2025-01-27T16:42:27
| 2025-01-27T17:11:11
| 2025-01-27T17:11:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "LeC-D",
"id": 17554693,
"node_id": "MDQ6VXNlcjE3NTU0Njkz",
"avatar_url": "https://avatars.githubusercontent.com/u/17554693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeC-D",
"html_url": "https://github.com/LeC-D",
"followers_url": "https://api.github.com/users/LeC-D/followers",
"following_url": "https://api.github.com/users/LeC-D/following{/other_user}",
"gists_url": "https://api.github.com/users/LeC-D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeC-D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeC-D/subscriptions",
"organizations_url": "https://api.github.com/users/LeC-D/orgs",
"repos_url": "https://api.github.com/users/LeC-D/repos",
"events_url": "https://api.github.com/users/LeC-D/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeC-D/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8610/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4632
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4632/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4632/comments
|
https://api.github.com/repos/ollama/ollama/issues/4632/events
|
https://github.com/ollama/ollama/pull/4632
| 2,316,901,295
|
PR_kwDOJ0Z1Ps5wi95F
| 4,632
|
make cache_prompt as an option
|
{
"login": "Windfarer",
"id": 7036121,
"node_id": "MDQ6VXNlcjcwMzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7036121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Windfarer",
"html_url": "https://github.com/Windfarer",
"followers_url": "https://api.github.com/users/Windfarer/followers",
"following_url": "https://api.github.com/users/Windfarer/following{/other_user}",
"gists_url": "https://api.github.com/users/Windfarer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Windfarer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Windfarer/subscriptions",
"organizations_url": "https://api.github.com/users/Windfarer/orgs",
"repos_url": "https://api.github.com/users/Windfarer/repos",
"events_url": "https://api.github.com/users/Windfarer/events{/privacy}",
"received_events_url": "https://api.github.com/users/Windfarer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 14
| 2024-05-25T10:14:11
| 2024-09-26T18:28:13
| 2024-09-26T18:28:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4632",
"html_url": "https://github.com/ollama/ollama/pull/4632",
"diff_url": "https://github.com/ollama/ollama/pull/4632.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4632.patch",
"merged_at": null
}
|
When making requests as described in [request-reproducible-outputs](https://github.com/ollama/ollama/blob/main/docs/api.md#request-reproducible-outputs), the responses are not reproducible.
It's easy to reproduce this issue; every one of the following requests sets `temperature=0` and `seed=1`:
1. request with prompt A
2. request with prompt B
3. request with prompt A
4. request with prompt A
5. request with prompt A
We will find that outputs 1, 3, and 4 differ, while outputs 4 and 5 are the same.
I ran some tests with llama.cpp and found that when `cache_prompt` is enabled, the response is affected by the previous input.
So we should make `cache_prompt` an option, allowing users to disable it when they need reproducible outputs.
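A reproducibility check with the option disabled might look like the following sketch (the model name, host, and the `cache_prompt` field itself are assumptions based on this proposal; `cache_prompt` is not part of the released Ollama API):

```python
import json
import urllib.request


def build_payload(prompt, cache_prompt=True):
    """Build a non-streaming /api/generate request body with a fixed
    seed and temperature 0, as in the reproducibility docs. The
    `cache_prompt` option is the one this PR proposes (hypothetical)."""
    return {
        "model": "llama3",  # placeholder model name
        "prompt": prompt,
        "stream": False,
        "options": {"seed": 1, "temperature": 0,
                    "cache_prompt": cache_prompt},
    }


def generate(prompt, cache_prompt=True, host="http://localhost:11434"):
    """POST the payload to a local Ollama server and return the text."""
    req = urllib.request.Request(
        host + "/api/generate",
        data=json.dumps(build_payload(prompt, cache_prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `cache_prompt=False`, repeating prompt A after prompt B should then return the same output as the first request for prompt A.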
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4632/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4632/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7784
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7784/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7784/comments
|
https://api.github.com/repos/ollama/ollama/issues/7784/events
|
https://github.com/ollama/ollama/pull/7784
| 2,680,395,130
|
PR_kwDOJ0Z1Ps6CtMj-
| 7,784
|
docs: remove tutorials, add cloud section to community integrations
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-11-21T17:58:10
| 2024-12-02T03:34:23
| 2024-11-21T17:59:54
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7784",
"html_url": "https://github.com/ollama/ollama/pull/7784",
"diff_url": "https://github.com/ollama/ollama/pull/7784.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7784.patch",
"merged_at": "2024-11-21T17:59:54"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7784/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8073
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8073/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8073/comments
|
https://api.github.com/repos/ollama/ollama/issues/8073/events
|
https://github.com/ollama/ollama/pull/8073
| 2,736,380,450
|
PR_kwDOJ0Z1Ps6FC056
| 8,073
|
llama: Fix the KV cache quants q4_0 and q8_0 lead server abort in large context chat.
|
{
"login": "mengqin",
"id": 7312672,
"node_id": "MDQ6VXNlcjczMTI2NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7312672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mengqin",
"html_url": "https://github.com/mengqin",
"followers_url": "https://api.github.com/users/mengqin/followers",
"following_url": "https://api.github.com/users/mengqin/following{/other_user}",
"gists_url": "https://api.github.com/users/mengqin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mengqin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mengqin/subscriptions",
"organizations_url": "https://api.github.com/users/mengqin/orgs",
"repos_url": "https://api.github.com/users/mengqin/repos",
"events_url": "https://api.github.com/users/mengqin/events{/privacy}",
"received_events_url": "https://api.github.com/users/mengqin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2024-12-12T16:34:28
| 2025-01-23T13:53:58
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8073",
"html_url": "https://github.com/ollama/ollama/pull/8073",
"diff_url": "https://github.com/ollama/ollama/pull/8073.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8073.patch",
"merged_at": null
}
|
This fix addresses the two parts that cause the problem. First, the context shift needs a correct copy operation for the q4_0-to-f32 temporary cache tensor, so new functions were added in ggml_cuda_cpy and ggml_cuda_dup to handle it.
The second part is in the ggml_compute_forward_dup operation, which also needs correct handling for the q4_0-to-f32 and q8_0-to-f32 copies; those were added as well.
This is a fix of https://github.com/ollama/ollama/issues/7938
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8073/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3457
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3457/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3457/comments
|
https://api.github.com/repos/ollama/ollama/issues/3457/events
|
https://github.com/ollama/ollama/pull/3457
| 2,220,620,261
|
PR_kwDOJ0Z1Ps5rcXlg
| 3,457
|
docs: added view logs instruction for Ollama via brew
|
{
"login": "nicholaslck",
"id": 52269659,
"node_id": "MDQ6VXNlcjUyMjY5NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52269659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicholaslck",
"html_url": "https://github.com/nicholaslck",
"followers_url": "https://api.github.com/users/nicholaslck/followers",
"following_url": "https://api.github.com/users/nicholaslck/following{/other_user}",
"gists_url": "https://api.github.com/users/nicholaslck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nicholaslck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nicholaslck/subscriptions",
"organizations_url": "https://api.github.com/users/nicholaslck/orgs",
"repos_url": "https://api.github.com/users/nicholaslck/repos",
"events_url": "https://api.github.com/users/nicholaslck/events{/privacy}",
"received_events_url": "https://api.github.com/users/nicholaslck/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-02T14:04:44
| 2024-11-21T09:28:38
| 2024-11-21T09:28:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3457",
"html_url": "https://github.com/ollama/ollama/pull/3457",
"diff_url": "https://github.com/ollama/ollama/pull/3457.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3457.patch",
"merged_at": null
}
|
For users who installed Ollama via brew and run it via `brew services`, the log file is located in a different place.
I propose updating the troubleshooting doc to reflect this behaviour.
This PR uses the log file path specified in this [spec](https://formulae.brew.sh/api/formula/ollama.json) (search for "log_path" in the JSON file)
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3457/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5437
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5437/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5437/comments
|
https://api.github.com/repos/ollama/ollama/issues/5437/events
|
https://github.com/ollama/ollama/issues/5437
| 2,386,642,674
|
I_kwDOJ0Z1Ps6OQUby
| 5,437
|
502 Bad Gateway
|
{
"login": "jacktang",
"id": 44341,
"node_id": "MDQ6VXNlcjQ0MzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/44341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacktang",
"html_url": "https://github.com/jacktang",
"followers_url": "https://api.github.com/users/jacktang/followers",
"following_url": "https://api.github.com/users/jacktang/following{/other_user}",
"gists_url": "https://api.github.com/users/jacktang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jacktang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jacktang/subscriptions",
"organizations_url": "https://api.github.com/users/jacktang/orgs",
"repos_url": "https://api.github.com/users/jacktang/repos",
"events_url": "https://api.github.com/users/jacktang/events{/privacy}",
"received_events_url": "https://api.github.com/users/jacktang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-07-02T16:07:17
| 2024-07-03T01:06:56
| 2024-07-03T01:06:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello,
I installed ollama and pulled models, but I can't access the endpoint. It always returns a 502 Bad Gateway error:
```
$ wget http://localhost:11434
--2024-07-02 11:01:17-- http://localhost:11434/
Connecting to 10.0.5.8:30890... connected.
Proxy request sent, awaiting response... 502 Bad Gateway
2024-07-02 11:01:17 ERROR 502: Bad Gateway.
```
I also checked the server log
```
-- Logs begin at Sun 2024-06-23 16:41:51 CDT, end at Tue 2024-07-02 10:57:11 CDT. --
Jul 02 10:03:39 dev-MS-7D25 systemd[1]: Started Ollama Service.
Jul 02 10:03:39 dev-MS-7D25 ollama[3813757]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Jul 02 10:03:39 dev-MS-7D25 ollama[3813757]: Your new public key is:
Jul 02 10:03:39 dev-MS-7D25 ollama[3813757]: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIjqR5td3RHcOoXOMD3oE18JYaOWI2CAZh30fyU213B9
Jul 02 10:03:39 dev-MS-7D25 ollama[3813757]: 2024/07/02 10:03:39 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OL>
Jul 02 10:03:39 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:03:39.049-05:00 level=INFO source=images.go:730 msg="total blobs: 0"
Jul 02 10:03:39 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:03:39.049-05:00 level=INFO source=images.go:737 msg="total unused blobs removed: 0"
Jul 02 10:03:39 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:03:39.049-05:00 level=INFO source=routes.go:1111 msg="Listening on 127.0.0.1:11434 (version 0.1.48)"
Jul 02 10:03:39 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:03:39.049-05:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3128667190/runners
Jul 02 10:03:40 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:03:40.623-05:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11 rocm_v60101 cpu cpu_avx]"
Jul 02 10:03:40 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:03:40.732-05:00 level=INFO source=types.go:98 msg="inference compute" id=GPU-8af1e031-b435-37c4-9cda-3f8db6d5ed54 library=cuda compute=8>
Jul 02 10:05:07 dev-MS-7D25 ollama[3813757]: [GIN] 2024/07/02 - 10:05:07 | 200 | 61.718µs | 127.0.0.1 | HEAD "/"
Jul 02 10:05:10 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:05:10.114-05:00 level=INFO source=download.go:136 msg="downloading 6a0746a1ec1a in 47 100 MB part(s)"
Jul 02 10:05:37 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:05:37.115-05:00 level=INFO source=download.go:251 msg="6a0746a1ec1a part 12 stalled; retrying. If this persists, press ctrl-c to exit, t>
Jul 02 10:06:50 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:06:50.115-05:00 level=INFO source=download.go:251 msg="6a0746a1ec1a part 30 stalled; retrying. If this persists, press ctrl-c to exit, t>
Jul 02 10:07:08 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:07:08.115-05:00 level=INFO source=download.go:251 msg="6a0746a1ec1a part 40 stalled; retrying. If this persists, press ctrl-c to exit, t>
Jul 02 10:07:09 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:07:09.312-05:00 level=INFO source=download.go:178 msg="6a0746a1ec1a part 21 attempt 0 failed: unexpected EOF, retrying in 1s"
Jul 02 10:07:13 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:07:13.115-05:00 level=INFO source=download.go:251 msg="6a0746a1ec1a part 14 stalled; retrying. If this persists, press ctrl-c to exit, t>
Jul 02 10:07:14 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:07:14.312-05:00 level=INFO source=download.go:251 msg="6a0746a1ec1a part 21 stalled; retrying. If this persists, press ctrl-c to exit, t>
Jul 02 10:07:18 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:07:18.679-05:00 level=INFO source=download.go:178 msg="6a0746a1ec1a part 0 attempt 0 failed: unexpected EOF, retrying in 1s"
Jul 02 10:07:20 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:07:20.680-05:00 level=INFO source=download.go:251 msg="6a0746a1ec1a part 0 stalled; retrying. If this persists, press ctrl-c to exit, th>
Jul 02 10:07:33 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:07:33.690-05:00 level=INFO source=download.go:178 msg="6a0746a1ec1a part 13 attempt 0 failed: unexpected EOF, retrying in 1s"
Jul 02 10:08:15 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:08:15.314-05:00 level=INFO source=download.go:251 msg="6a0746a1ec1a part 21 stalled; retrying. If this persists, press ctrl-c to exit, t>
Jul 02 10:08:20 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:08:20.043-05:00 level=INFO source=download.go:178 msg="6a0746a1ec1a part 10 attempt 0 failed: unexpected EOF, retrying in 1s"
Jul 02 10:08:22 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:08:22.044-05:00 level=INFO source=download.go:251 msg="6a0746a1ec1a part 10 stalled; retrying. If this persists, press ctrl-c to exit, t>
Jul 02 10:08:38 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:08:38.903-05:00 level=INFO source=download.go:178 msg="6a0746a1ec1a part 35 attempt 0 failed: unexpected EOF, retrying in 1s"
Jul 02 10:08:50 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:08:50.549-05:00 level=INFO source=download.go:178 msg="6a0746a1ec1a part 14 attempt 0 failed: unexpected EOF, retrying in 1s"
Jul 02 10:09:54 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:09:54.227-05:00 level=INFO source=download.go:178 msg="6a0746a1ec1a part 9 attempt 0 failed: unexpected EOF, retrying in 1s"
Jul 02 10:10:46 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:10:46.195-05:00 level=INFO source=download.go:178 msg="6a0746a1ec1a part 1 attempt 0 failed: unexpected EOF, retrying in 1s"
Jul 02 10:10:57 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:10:57.841-05:00 level=INFO source=download.go:178 msg="6a0746a1ec1a part 23 attempt 0 failed: unexpected EOF, retrying in 1s"
Jul 02 10:11:01 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:11:01.784-05:00 level=INFO source=download.go:178 msg="6a0746a1ec1a part 11 attempt 0 failed: unexpected EOF, retrying in 1s"
Jul 02 10:11:03 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:11:03.488-05:00 level=INFO source=download.go:178 msg="6a0746a1ec1a part 22 attempt 0 failed: unexpected EOF, retrying in 1s"
Jul 02 10:11:11 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:11:11.116-05:00 level=INFO source=download.go:251 msg="6a0746a1ec1a part 12 stalled; retrying. If this persists, press ctrl-c to exit, t>
Jul 02 10:13:12 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:13:12.970-05:00 level=INFO source=download.go:178 msg="6a0746a1ec1a part 18 attempt 0 failed: unexpected EOF, retrying in 1s"
Jul 02 10:13:27 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:13:27.065-05:00 level=INFO source=download.go:136 msg="downloading 4fa551d4f938 in 1 12 KB part(s)"
Jul 02 10:13:30 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:13:30.160-05:00 level=INFO source=download.go:136 msg="downloading 8ab4849b038c in 1 254 B part(s)"
Jul 02 10:13:33 dev-MS-7D25 ollama[3813757]: time=2024-07-02T10:13:33.501-05:00 level=INFO source=download.go:136 msg="downloading 577073ffcc6c in 1 110 B part(s)"
```
Does the message `unexpected EOF` mean the model was not completely pulled?
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.48
|
{
"login": "jacktang",
"id": 44341,
"node_id": "MDQ6VXNlcjQ0MzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/44341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacktang",
"html_url": "https://github.com/jacktang",
"followers_url": "https://api.github.com/users/jacktang/followers",
"following_url": "https://api.github.com/users/jacktang/following{/other_user}",
"gists_url": "https://api.github.com/users/jacktang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jacktang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jacktang/subscriptions",
"organizations_url": "https://api.github.com/users/jacktang/orgs",
"repos_url": "https://api.github.com/users/jacktang/repos",
"events_url": "https://api.github.com/users/jacktang/events{/privacy}",
"received_events_url": "https://api.github.com/users/jacktang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5437/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8577
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8577/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8577/comments
|
https://api.github.com/repos/ollama/ollama/issues/8577/events
|
https://github.com/ollama/ollama/issues/8577
| 2,810,812,951
|
I_kwDOJ0Z1Ps6niZoX
| 8,577
|
Context caching in RAM
|
{
"login": "JKratto",
"id": 17408704,
"node_id": "MDQ6VXNlcjE3NDA4NzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/17408704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JKratto",
"html_url": "https://github.com/JKratto",
"followers_url": "https://api.github.com/users/JKratto/followers",
"following_url": "https://api.github.com/users/JKratto/following{/other_user}",
"gists_url": "https://api.github.com/users/JKratto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JKratto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JKratto/subscriptions",
"organizations_url": "https://api.github.com/users/JKratto/orgs",
"repos_url": "https://api.github.com/users/JKratto/repos",
"events_url": "https://api.github.com/users/JKratto/events{/privacy}",
"received_events_url": "https://api.github.com/users/JKratto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 3
| 2025-01-25T06:52:14
| 2025-01-25T11:49:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have been thinking about the possibility of caching computed context in RAM. Let me explain how it could be helpful on the cheaper hardware that many of us are running.
The challenges:
1) you have limited VRAM. You have to split it between the model and the context cache (context length). As you are using the server alone (or with a few friends), you do not really need concurrency, so you allow only one context to be stored. That means more VRAM for the model and 1 context.
2) you sometimes want to chat about different things in different chats (especially when using WebUI), seems like kind of a natural workflow.
3) you sometimes want some inference at roughly the same time as your friend (pretty similar from the ollama standpoint)
4) whenever you send a prompt, it gets ingested and you get the response. When you send a response to that, the context is already cached = immediate inference.
5) when you switch chats or a friend starts their own conversation, the pre-computed context is gone.
6) especially for longer conversations (e.g. coding or deep philosophical discussions idk) it takes a long time for the whole conversation to be ingested again before the next response can be generated.
Here comes the idea of context caching in RAM to the rescue. RAM is relatively cheap, and CPU/RAM inference is usually slow compared to GPU/VRAM (most of us do not have Genoas with lots of memory bandwidth). My proposal:
1) allow allocating memory in RAM specifically for a specified number of context caches.
2) the "normal" number of contexts would still be allocated in VRAM, it needs to be accessible with high bandwidth.
3) keep sort of a FIFO in RAM (if you choose to allocate more space for context cache in RAM), so that the oldest contexts are discarded first
4) when you detect that there is a continuation of a previous conversation, first check whether the pre-computed context is cached; if so, load it into VRAM basically instantly (versus computing it again for longer contexts..)
What do you think? Is it doable? Is it a silly idea? Are there limitations I do not see? Is it "too much my usecase"? I do not really know! :)
Last but not least, thank you all for your wonderful work! <3
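The proposed two-tier scheme (one hot context in VRAM, older contexts spilled to a FIFO in RAM) can be sketched roughly like this. All names here (`ContextCache`, `hot`, `ram`) are illustrative placeholders, not Ollama APIs, and the "contexts" are just stand-in values rather than real KV caches:

```python
from collections import OrderedDict

class ContextCache:
    """Toy sketch of the proposed two-tier context cache.

    One "hot" slot stands in for the context kept in VRAM; evicted
    contexts spill into a FIFO (OrderedDict) standing in for RAM.
    """

    def __init__(self, ram_slots=4):
        self.hot = None           # (conversation_id, context) kept "in VRAM"
        self.ram = OrderedDict()  # FIFO of evicted contexts "in RAM"
        self.ram_slots = ram_slots

    def get(self, conv_id):
        # Hot hit: context is already "in VRAM", inference is immediate.
        if self.hot and self.hot[0] == conv_id:
            return self.hot[1]
        # RAM hit: promote back to the hot slot instead of recomputing.
        if conv_id in self.ram:
            ctx = self.ram.pop(conv_id)
            self.put(conv_id, ctx)
            return ctx
        return None  # miss: the full conversation must be re-ingested

    def put(self, conv_id, ctx):
        # Evict the current hot context into the RAM FIFO.
        if self.hot:
            self.ram[self.hot[0]] = self.hot[1]
            if len(self.ram) > self.ram_slots:
                self.ram.popitem(last=False)  # drop the oldest first
        self.hot = (conv_id, ctx)
```

A RAM hit here costs one copy back into the hot slot, which mirrors the claim in the proposal: loading a cached context is far cheaper than re-ingesting a long conversation from scratch.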
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8577/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/8577/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/307
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/307/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/307/comments
|
https://api.github.com/repos/ollama/ollama/issues/307/events
|
https://github.com/ollama/ollama/pull/307
| 1,840,343,356
|
PR_kwDOJ0Z1Ps5XYfDd
| 307
|
quantize f32, f16
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-08-07T23:38:20
| 2023-08-30T15:55:37
| 2023-08-30T15:55:34
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/307",
"html_url": "https://github.com/ollama/ollama/pull/307",
"diff_url": "https://github.com/ollama/ollama/pull/307.diff",
"patch_url": "https://github.com/ollama/ollama/pull/307.patch",
"merged_at": null
}
|
If the input model in a Modelfile is a GGML f32 or f16 file type, and the `FROM` line contains the `AS` keyword, quantize the model to the specified level.
Example Modelfile:
```
FROM /path/to/my/f32.bin AS Q4_0
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/307/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8021
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8021/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8021/comments
|
https://api.github.com/repos/ollama/ollama/issues/8021/events
|
https://github.com/ollama/ollama/issues/8021
| 2,729,058,863
|
I_kwDOJ0Z1Ps6iqiIv
| 8,021
|
Incorrect configuration in EXAONE 3.5
|
{
"login": "lgai-exaone",
"id": 176995546,
"node_id": "U_kgDOCoy82g",
"avatar_url": "https://avatars.githubusercontent.com/u/176995546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lgai-exaone",
"html_url": "https://github.com/lgai-exaone",
"followers_url": "https://api.github.com/users/lgai-exaone/followers",
"following_url": "https://api.github.com/users/lgai-exaone/following{/other_user}",
"gists_url": "https://api.github.com/users/lgai-exaone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lgai-exaone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lgai-exaone/subscriptions",
"organizations_url": "https://api.github.com/users/lgai-exaone/orgs",
"repos_url": "https://api.github.com/users/lgai-exaone/repos",
"events_url": "https://api.github.com/users/lgai-exaone/events{/privacy}",
"received_events_url": "https://api.github.com/users/lgai-exaone/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-12-10T05:33:09
| 2024-12-19T02:03:36
| 2024-12-10T08:00:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
We recently published EXAONE 3.5, and we appreciate your quick support of EXAONE 3.5 in Ollama.
https://ollama.com/library/exaone3.5
However, we found that the applied template differs from the original template.
We checked the prompt template and determined it should be modified as below:
```
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{ if eq .Role "system" }}[|system|]{{ .Content }}[|endofturn|]
{{ continue }}
{{ else if eq .Role "user" }}[|user|]{{ .Content }}
{{ else if eq .Role "assistant" }}[|assistant|]{{ .Content }}[|endofturn|]
{{ end }}
{{- if and (ne .Role "assistant") $last }}[|assistant|]{{ end }}
{{- end -}}
```
Could you update the template for EXAONE 3.5 in the ollama library?
Additionally, is there a method to configure generation settings (e.g. stop words, repetition penalty) for the model in the Ollama library? We need to set certain parameters to avoid performance degradation.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8021/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4856
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4856/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4856/comments
|
https://api.github.com/repos/ollama/ollama/issues/4856/events
|
https://github.com/ollama/ollama/issues/4856
| 2,338,310,124
|
I_kwDOJ0Z1Ps6LX8fs
| 4,856
|
OLLAMA_MODELS is broken in 0.1.41
|
{
"login": "rcarmo",
"id": 392683,
"node_id": "MDQ6VXNlcjM5MjY4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/392683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcarmo",
"html_url": "https://github.com/rcarmo",
"followers_url": "https://api.github.com/users/rcarmo/followers",
"following_url": "https://api.github.com/users/rcarmo/following{/other_user}",
"gists_url": "https://api.github.com/users/rcarmo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcarmo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcarmo/subscriptions",
"organizations_url": "https://api.github.com/users/rcarmo/orgs",
"repos_url": "https://api.github.com/users/rcarmo/repos",
"events_url": "https://api.github.com/users/rcarmo/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcarmo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-06-06T13:46:30
| 2024-06-13T19:54:25
| 2024-06-13T19:54:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am running Fedora Silverblue (with an immutable filesystem), so I have long set ollama to run with my own systemd unit:
```ini
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
Restart=always
RestartSec=3
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_MODELS=/var/mnt/models/ollama"
Environment="PATH=/var/home/me/.local/bin:/var/home/me/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin"
ExecStart=/usr/local/bin/ollama serve
[Install]
WantedBy=default.target
```
Upgrading to `0.1.41` broke this spectacularly, because it seems to have stopped using environment variables at all. The config above used to force ollama to look for its data under `/var/mnt/models`, but it no longer does. What I get is this:
```bash
make restart
systemctl --user restart ollama
systemctl --user status ollama
● ollama.service - Ollama Service
Loaded: loaded (/etc/systemd/user/ollama.service; enabled; preset: disabled)
Drop-In: /usr/lib/systemd/user/service.d
└─10-timeout-abort.conf
Active: active (running) since Thu 2024-06-06 14:41:28 WEST; 5ms ago
Main PID: 8099 (ollama)
Tasks: 5 (limit: 38341)
Memory: 2.9M (peak: 3.1M)
CPU: 5ms
CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/ollama.service
└─8099 /usr/local/bin/ollama serve
Jun 06 14:41:28 silverblue systemd[7552]: Started ollama.service - Ollama Service.
me@silverblue:~/.ollama$ make logs
journalctl -fu ollama
Jun 06 14:30:27 silverblue systemd[1]: Started ollama.service - Ollama Service.
Jun 06 14:30:27 silverblue ollama[7830]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Jun 06 14:30:27 silverblue ollama[7830]: Error: could not create directory mkdir /usr/share/ollama: read-only file system
Jun 06 14:30:27 silverblue systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Jun 06 14:30:27 silverblue systemd[1]: ollama.service: Failed with result 'exit-code'.
Jun 06 14:30:27 silverblue systemd[1]: Stopped ollama.service - Ollama Service.
```
I have a Makefile that handles upgrading and switching to my custom unit file above that works like this:
```makefile
upgrade:
curl -fsSL https://ollama.com/install.sh | sh
sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service
sudo cp ollama.service /etc/systemd/user/
systemctl --user daemon-reload
systemctl --user start ollama
systemctl --user status ollama
logs:
journalctl -fu ollama
restart:
systemctl --user restart ollama
systemctl --user status ollama
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.41
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4856/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4856/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8330
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8330/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8330/comments
|
https://api.github.com/repos/ollama/ollama/issues/8330/events
|
https://github.com/ollama/ollama/issues/8330
| 2,771,961,006
|
I_kwDOJ0Z1Ps6lOMSu
| 8,330
|
Using the Ollama 0.5.4 will cause the pull progress to decrease instead of increase.
|
{
"login": "leoho0722",
"id": 73574800,
"node_id": "MDQ6VXNlcjczNTc0ODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/73574800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leoho0722",
"html_url": "https://github.com/leoho0722",
"followers_url": "https://api.github.com/users/leoho0722/followers",
"following_url": "https://api.github.com/users/leoho0722/following{/other_user}",
"gists_url": "https://api.github.com/users/leoho0722/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leoho0722/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leoho0722/subscriptions",
"organizations_url": "https://api.github.com/users/leoho0722/orgs",
"repos_url": "https://api.github.com/users/leoho0722/repos",
"events_url": "https://api.github.com/users/leoho0722/events{/privacy}",
"received_events_url": "https://api.github.com/users/leoho0722/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
open
| false
| null |
[] | null | 2
| 2025-01-07T05:54:08
| 2025-01-22T04:25:34
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi! I created a CPU Instance in HPC-AI.com to pull llama3.3:70b-instruct-fp16 and store it in Shared HighSpeedStorage for subsequent inference in the GPU Instance.
The Ollama version installed in the CPU Instance is 0.5.4, as shown below:

However, when I pulled the model in the CPU Instance, I found that the pull progress decreased instead of increasing, as shown in the following two figures.


After discussions with HPC-AI.com's technical advisors, HPC-AI.com said there were no issues with their infrastructure, as shown below:




I tried to pull llama3.3:70b-instruct-fp16 locally. The pull progress was normal and continued to increase. There was no problem of decreasing the pull progress, as shown in the following two figures.


But the Ollama version installed locally is 0.3.14

I'm wondering if this is a known bug in Ollama 0.5.4?
Looking forward to your reply, thanks.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8330/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8330/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1882
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1882/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1882/comments
|
https://api.github.com/repos/ollama/ollama/issues/1882/events
|
https://github.com/ollama/ollama/issues/1882
| 2,073,457,678
|
I_kwDOJ0Z1Ps57lnQO
| 1,882
|
Embedding generation is slow
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 6677485533,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgJX3Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/embeddings",
"name": "embeddings",
"color": "76BF9F",
"default": false,
"description": "Issues around embeddings"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-01-10T02:10:21
| 2024-06-25T04:33:07
| 2024-06-25T04:33:07
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When using `/api/embeddings`, large documents can take up to a second
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1882/reactions",
"total_count": 13,
"+1": 12,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1882/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5235
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5235/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5235/comments
|
https://api.github.com/repos/ollama/ollama/issues/5235/events
|
https://github.com/ollama/ollama/issues/5235
| 2,368,193,239
|
I_kwDOJ0Z1Ps6NJ8LX
| 5,235
|
Claude 3.5 model
|
{
"login": "zhouhao27",
"id": 8099731,
"node_id": "MDQ6VXNlcjgwOTk3MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8099731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhouhao27",
"html_url": "https://github.com/zhouhao27",
"followers_url": "https://api.github.com/users/zhouhao27/followers",
"following_url": "https://api.github.com/users/zhouhao27/following{/other_user}",
"gists_url": "https://api.github.com/users/zhouhao27/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhouhao27/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhouhao27/subscriptions",
"organizations_url": "https://api.github.com/users/zhouhao27/orgs",
"repos_url": "https://api.github.com/users/zhouhao27/repos",
"events_url": "https://api.github.com/users/zhouhao27/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhouhao27/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-06-23T03:10:49
| 2024-12-18T08:07:40
| 2024-06-27T21:22:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is it possible to support loading the open source Claude 3.5 model? Thanks.
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5235/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5235/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4065
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4065/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4065/comments
|
https://api.github.com/repos/ollama/ollama/issues/4065/events
|
https://github.com/ollama/ollama/pull/4065
| 2,272,610,831
|
PR_kwDOJ0Z1Ps5uNCDY
| 4,065
|
types/model: reintroduce Digest
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-30T22:53:16
| 2024-04-30T23:38:03
| 2024-04-30T23:38:03
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4065",
"html_url": "https://github.com/ollama/ollama/pull/4065",
"diff_url": "https://github.com/ollama/ollama/pull/4065.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4065.patch",
"merged_at": "2024-04-30T23:38:03"
}
| null |
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4065/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2837
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2837/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2837/comments
|
https://api.github.com/repos/ollama/ollama/issues/2837/events
|
https://github.com/ollama/ollama/pull/2837
| 2,161,678,482
|
PR_kwDOJ0Z1Ps5oUR7h
| 2,837
|
Add env var so podman will map cuda GPUs
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-29T16:43:41
| 2024-02-29T23:47:40
| 2024-02-29T23:47:37
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2837",
"html_url": "https://github.com/ollama/ollama/pull/2837",
"diff_url": "https://github.com/ollama/ollama/pull/2837.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2837.patch",
"merged_at": "2024-02-29T23:47:37"
}
|
Without this env var, podman's GPU logic doesn't map the GPU through
Fixes #2716
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2837/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2837/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/353
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/353/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/353/comments
|
https://api.github.com/repos/ollama/ollama/issues/353/events
|
https://github.com/ollama/ollama/issues/353
| 1,851,805,528
|
I_kwDOJ0Z1Ps5uYE9Y
| 353
|
Non-interactive CLI to prompt a model
|
{
"login": "bhazzard",
"id": 312158,
"node_id": "MDQ6VXNlcjMxMjE1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/312158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhazzard",
"html_url": "https://github.com/bhazzard",
"followers_url": "https://api.github.com/users/bhazzard/followers",
"following_url": "https://api.github.com/users/bhazzard/following{/other_user}",
"gists_url": "https://api.github.com/users/bhazzard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhazzard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhazzard/subscriptions",
"organizations_url": "https://api.github.com/users/bhazzard/orgs",
"repos_url": "https://api.github.com/users/bhazzard/repos",
"events_url": "https://api.github.com/users/bhazzard/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhazzard/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-08-15T17:01:24
| 2024-11-07T12:21:31
| 2023-08-22T01:07:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Having a non-interactive CLI command would allow using ollama programmatically in other bash scripts without the need for a long-running process to keep the server up for the API.
Something like...
`ollama prompt "<prompt>" -m <model>`
OR
```
ollama use <model>
ollama prompt
```
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/353/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/353/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/569
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/569/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/569/comments
|
https://api.github.com/repos/ollama/ollama/issues/569/events
|
https://github.com/ollama/ollama/pull/569
| 1,907,820,085
|
PR_kwDOJ0Z1Ps5a70U6
| 569
|
silence warm up log
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-21T21:54:44
| 2023-09-21T23:52:44
| 2023-09-21T23:52:43
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/569",
"html_url": "https://github.com/ollama/ollama/pull/569",
"diff_url": "https://github.com/ollama/ollama/pull/569.diff",
"patch_url": "https://github.com/ollama/ollama/pull/569.patch",
"merged_at": "2023-09-21T23:52:43"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/569/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4049
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4049/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4049/comments
|
https://api.github.com/repos/ollama/ollama/issues/4049/events
|
https://github.com/ollama/ollama/issues/4049
| 2,271,314,675
|
I_kwDOJ0Z1Ps6HYYLz
| 4,049
|
support for CodeQwen1.5-7B-Chat
|
{
"login": "rburgst",
"id": 22971,
"node_id": "MDQ6VXNlcjIyOTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/22971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rburgst",
"html_url": "https://github.com/rburgst",
"followers_url": "https://api.github.com/users/rburgst/followers",
"following_url": "https://api.github.com/users/rburgst/following{/other_user}",
"gists_url": "https://api.github.com/users/rburgst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rburgst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rburgst/subscriptions",
"organizations_url": "https://api.github.com/users/rburgst/orgs",
"repos_url": "https://api.github.com/users/rburgst/repos",
"events_url": "https://api.github.com/users/rburgst/events{/privacy}",
"received_events_url": "https://api.github.com/users/rburgst/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-30T12:18:23
| 2024-04-30T17:17:30
| 2024-04-30T17:17:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat
AFAICS this model is currently not available on ollama; it's supposed to be quite good for coding: https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard.
The model at https://ollama.com/library/qwen does not appear to be a coding-optimized variant.
|
{
"login": "rburgst",
"id": 22971,
"node_id": "MDQ6VXNlcjIyOTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/22971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rburgst",
"html_url": "https://github.com/rburgst",
"followers_url": "https://api.github.com/users/rburgst/followers",
"following_url": "https://api.github.com/users/rburgst/following{/other_user}",
"gists_url": "https://api.github.com/users/rburgst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rburgst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rburgst/subscriptions",
"organizations_url": "https://api.github.com/users/rburgst/orgs",
"repos_url": "https://api.github.com/users/rburgst/repos",
"events_url": "https://api.github.com/users/rburgst/events{/privacy}",
"received_events_url": "https://api.github.com/users/rburgst/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4049/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8009
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8009/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8009/comments
|
https://api.github.com/repos/ollama/ollama/issues/8009/events
|
https://github.com/ollama/ollama/issues/8009
| 2,726,456,803
|
I_kwDOJ0Z1Ps6igm3j
| 8,009
|
make it somehow visible that the context size in models is not used by default
|
{
"login": "fce2",
"id": 16529960,
"node_id": "MDQ6VXNlcjE2NTI5OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/16529960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fce2",
"html_url": "https://github.com/fce2",
"followers_url": "https://api.github.com/users/fce2/followers",
"following_url": "https://api.github.com/users/fce2/following{/other_user}",
"gists_url": "https://api.github.com/users/fce2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fce2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fce2/subscriptions",
"organizations_url": "https://api.github.com/users/fce2/orgs",
"repos_url": "https://api.github.com/users/fce2/repos",
"events_url": "https://api.github.com/users/fce2/events{/privacy}",
"received_events_url": "https://api.github.com/users/fce2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-12-09T09:18:17
| 2024-12-14T16:34:38
| 2024-12-14T16:34:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This is extremely important; it drove me kind of crazy ("it worked before, what did I do wrong now?"):
!!! You can't use the model's native context size until you activate it manually !!!
Manual procedure:
ollama run llama3.2:3b-instruct-q8_0
> /set parameter num_ctx 65536
> /save llama3.2:3b-instruct-q8_0-65536
> /bye
Thanks for this hint, microsuxxor!!
It's also important to know: the model size increases dramatically; a llama3.2-3b model can't run on a 4090 anymore:
llama3.2:3b-instruct-q8_0 5.4GB 150tps
llama3.2:3b-instruct-q8_0-4096 6.8GB 146tps
llama3.2:3b-instruct-q8_0-8192 9.6GB 146tps
llama3.2:3b-instruct-q8_0-16384 15.0GB 146tps
llama3.2:3b-instruct-q8_0-32768 9.6GB 150tps
llama3.2:3b-instruct-q8_0-65536 15.0GB 150tps
llama3.2:3b-instruct-q8_0-131072 26.0GB 30tps
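The VRAM growth comes mostly from the KV cache, which scales linearly with num_ctx. A rough sketch of that scaling (the layer/head/dimension constants below are illustrative assumptions, not the exact llama3.2 configuration):

```python
def kv_cache_bytes(num_ctx, n_layers=28, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    # Keys and values: 2 tensors per layer, each num_ctx * n_kv_heads * head_dim
    # elements, stored as fp16 (2 bytes per element) here.
    return 2 * n_layers * num_ctx * n_kv_heads * head_dim * bytes_per_elem

for ctx in (4096, 32768, 131072):
    print(f"num_ctx={ctx}: ~{kv_cache_bytes(ctx) / 2**30:.1f} GiB of KV cache")
```

Doubling num_ctx doubles the KV-cache memory, which is why the larger variants above no longer fit next to the weights on a 24 GB card.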
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8009/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4630
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4630/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4630/comments
|
https://api.github.com/repos/ollama/ollama/issues/4630/events
|
https://github.com/ollama/ollama/issues/4630
| 2,316,811,641
|
I_kwDOJ0Z1Ps6KF715
| 4,630
|
Runner process terminated
|
{
"login": "GouthamGuna",
"id": 88366848,
"node_id": "MDQ6VXNlcjg4MzY2ODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/88366848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GouthamGuna",
"html_url": "https://github.com/GouthamGuna",
"followers_url": "https://api.github.com/users/GouthamGuna/followers",
"following_url": "https://api.github.com/users/GouthamGuna/following{/other_user}",
"gists_url": "https://api.github.com/users/GouthamGuna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GouthamGuna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GouthamGuna/subscriptions",
"organizations_url": "https://api.github.com/users/GouthamGuna/orgs",
"repos_url": "https://api.github.com/users/GouthamGuna/repos",
"events_url": "https://api.github.com/users/GouthamGuna/events{/privacy}",
"received_events_url": "https://api.github.com/users/GouthamGuna/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-05-25T07:53:02
| 2024-07-26T04:41:58
| 2024-07-25T23:37:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
While executing the `ollama run <model>` command, I received the error
`Error: llama runner process has terminated: exit status 0xc0000005`.
I am not aware of what causes the issue.
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
0.1.38
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4630/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6033
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6033/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6033/comments
|
https://api.github.com/repos/ollama/ollama/issues/6033/events
|
https://github.com/ollama/ollama/pull/6033
| 2,434,139,499
|
PR_kwDOJ0Z1Ps52rXcy
| 6,033
|
Enhance windows ROCm gfx compatibility
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-28T22:22:54
| 2024-08-08T20:40:02
| 2024-08-08T20:40:02
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6033",
"html_url": "https://github.com/ollama/ollama/pull/6033",
"diff_url": "https://github.com/ollama/ollama/pull/6033.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6033.patch",
"merged_at": null
}
|
The payload files in rocblas/library do not appear to be OS specific.
Spot testing on a gfx1101 system shows this appears viable. I'll keep this as a draft until I can verify it actually enables previously unsupported gfx targets and that they work properly at runtime.
Use the Linux rocblas tensile library data files, which have broader gfx support than the HIP Windows release.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6033/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6033/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/740
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/740/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/740/comments
|
https://api.github.com/repos/ollama/ollama/issues/740/events
|
https://github.com/ollama/ollama/issues/740
| 1,932,806,056
|
I_kwDOJ0Z1Ps5zNEeo
| 740
|
Using ollama-ui from remote client
|
{
"login": "suoko",
"id": 3659980,
"node_id": "MDQ6VXNlcjM2NTk5ODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3659980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suoko",
"html_url": "https://github.com/suoko",
"followers_url": "https://api.github.com/users/suoko/followers",
"following_url": "https://api.github.com/users/suoko/following{/other_user}",
"gists_url": "https://api.github.com/users/suoko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suoko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suoko/subscriptions",
"organizations_url": "https://api.github.com/users/suoko/orgs",
"repos_url": "https://api.github.com/users/suoko/repos",
"events_url": "https://api.github.com/users/suoko/events{/privacy}",
"received_events_url": "https://api.github.com/users/suoko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-10-09T10:57:00
| 2023-10-11T16:00:57
| 2023-10-11T16:00:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have a server with ollama which works OK.
If I install ollama-ui or use the Chrome extension (https://github.com/ollama-ui/ollama-ui), I can't reach the server from a remote client.
Let's say I have the server on 192.168.0.1 and the client on 192.168.0.2; how should I run the server?
I tried both the OLLAMA_ORIGINS and OLLAMA_HOST options with no luck.
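For what it's worth, a minimal reachability check from the client side (addresses taken from the question above; the default port 11434 is assumed):

```python
import urllib.request

def tags_url(host: str, port: int = 11434) -> str:
    # /api/tags lists installed models and is a cheap way to verify the
    # server is reachable from another machine.
    return f"http://{host}:{port}/api/tags"

# After starting the server with `OLLAMA_HOST=0.0.0.0 ollama serve` on
# 192.168.0.1, from 192.168.0.2 you could then try:
#   urllib.request.urlopen(tags_url("192.168.0.1"), timeout=5)
print(tags_url("192.168.0.1"))
```

If this URL times out, the server is likely still bound to 127.0.0.1 only; OLLAMA_ORIGINS matters separately for browser-based clients such as the extension.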
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/740/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2884
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2884/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2884/comments
|
https://api.github.com/repos/ollama/ollama/issues/2884/events
|
https://github.com/ollama/ollama/issues/2884
| 2,165,028,848
|
I_kwDOJ0Z1Ps6BC7fw
| 2,884
|
127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it.
|
{
"login": "tommcg",
"id": 1521555,
"node_id": "MDQ6VXNlcjE1MjE1NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1521555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tommcg",
"html_url": "https://github.com/tommcg",
"followers_url": "https://api.github.com/users/tommcg/followers",
"following_url": "https://api.github.com/users/tommcg/following{/other_user}",
"gists_url": "https://api.github.com/users/tommcg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tommcg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tommcg/subscriptions",
"organizations_url": "https://api.github.com/users/tommcg/orgs",
"repos_url": "https://api.github.com/users/tommcg/repos",
"events_url": "https://api.github.com/users/tommcg/events{/privacy}",
"received_events_url": "https://api.github.com/users/tommcg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 16
| 2024-03-02T23:39:15
| 2024-06-01T21:02:33
| 2024-06-01T21:02:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm receiving the error on Windows 10. I've closed all of the command prompts and quit the Ollama app via the icon in the systray.
Error: Post "http://127.0.0.1:11434/api/chat": dial tcp 127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it.
I can keep running them over and over, and the icons in the systray will start to pile up.
Can someone help me with finding the process that will kill the localhost connection so I can restart Ollama?
Thanks.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2884/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5578
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5578/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5578/comments
|
https://api.github.com/repos/ollama/ollama/issues/5578/events
|
https://github.com/ollama/ollama/pull/5578
| 2,398,846,855
|
PR_kwDOJ0Z1Ps503jYM
| 5,578
|
OpenAI Tests: Separate Request and Response Compare Tests
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-09T18:12:41
| 2024-07-09T20:48:33
| 2024-07-09T20:48:31
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5578",
"html_url": "https://github.com/ollama/ollama/pull/5578",
"diff_url": "https://github.com/ollama/ollama/pull/5578.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5578.patch",
"merged_at": "2024-07-09T20:48:31"
}
| null |
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5578/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3223
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3223/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3223/comments
|
https://api.github.com/repos/ollama/ollama/issues/3223/events
|
https://github.com/ollama/ollama/issues/3223
| 2,192,205,018
|
I_kwDOJ0Z1Ps6CqmTa
| 3,223
|
Go client marshals ChatRequest.KeepAlive into wrong data type
|
{
"login": "jankammerath",
"id": 9638085,
"node_id": "MDQ6VXNlcjk2MzgwODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9638085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jankammerath",
"html_url": "https://github.com/jankammerath",
"followers_url": "https://api.github.com/users/jankammerath/followers",
"following_url": "https://api.github.com/users/jankammerath/following{/other_user}",
"gists_url": "https://api.github.com/users/jankammerath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jankammerath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jankammerath/subscriptions",
"organizations_url": "https://api.github.com/users/jankammerath/orgs",
"repos_url": "https://api.github.com/users/jankammerath/repos",
"events_url": "https://api.github.com/users/jankammerath/events{/privacy}",
"received_events_url": "https://api.github.com/users/jankammerath/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-18T13:27:32
| 2024-03-19T09:22:49
| 2024-03-19T09:22:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
As described by @jmorganca in https://github.com/ollama/ollama/issues/2343, the `keep_alive` field can be `0`, `-1` or another value indicating how long the model shall stay in memory.
The following Go code defines the `keep_alive` field as another struct with a Duration in it.
https://github.com/ollama/ollama/blob/7ed3e941058a47464a1ee97cd16f464eb788e396/api/types.go#L45
The current implementation results in the `keep_alive` being marshaled into `{"Duration":86400000000000}`
Here's a Playground example on how the `ChatRequest` struct is marshaled into JSON in the current implementation.
```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type Duration struct {
	time.Duration
}

type ChatRequest struct {
	Model     string                 `json:"model"`
	KeepAlive *Duration              `json:"keep_alive,omitempty"`
	Options   map[string]interface{} `json:"options"`
}

func main() {
	request := ChatRequest{
		Model: "model-name",
		// keep alive for 24 hours
		KeepAlive: &Duration{Duration: time.Hour * 24},
	}

	jsonData, err := json.Marshal(request)
	if err != nil {
		fmt.Println(err)
	}
	fmt.Println(string(jsonData))
}
```
The output of this is as follows.
`{"model":"model-name","keep_alive":{"Duration":86400000000000},"options":null}`
A possible solution could be to change the `keep_alive` field to an `int64`.
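As an illustration of one possible fix (a hypothetical sketch, not necessarily the change adopted upstream; `marshalRequest` is an invented helper for the example), giving the wrapper type a custom `MarshalJSON` method lets `keep_alive` serialize as a plain number of seconds while keeping the `Duration` type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Duration wraps time.Duration so its JSON form can be controlled.
type Duration struct {
	time.Duration
}

// MarshalJSON serializes the duration as whole seconds rather than
// the default struct form {"Duration":<nanoseconds>}.
func (d Duration) MarshalJSON() ([]byte, error) {
	return json.Marshal(int64(d.Seconds()))
}

type ChatRequest struct {
	Model     string    `json:"model"`
	KeepAlive *Duration `json:"keep_alive,omitempty"`
}

// marshalRequest builds a sample request and returns its JSON encoding.
func marshalRequest() string {
	request := ChatRequest{
		Model:     "model-name",
		KeepAlive: &Duration{Duration: 24 * time.Hour},
	}
	b, err := json.Marshal(request)
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	fmt.Println(marshalRequest()) // {"model":"model-name","keep_alive":86400}
}
```

With this, the 24-hour example above marshals to `{"model":"model-name","keep_alive":86400}` instead of the nested struct form.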
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3223/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/1500
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1500/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1500/comments
|
https://api.github.com/repos/ollama/ollama/issues/1500/events
|
https://github.com/ollama/ollama/issues/1500
| 2,039,549,757
|
I_kwDOJ0Z1Ps55kQ89
| 1,500
|
GPU MIG not supported in Kubernetes
|
{
"login": "duhow",
"id": 1145001,
"node_id": "MDQ6VXNlcjExNDUwMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1145001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duhow",
"html_url": "https://github.com/duhow",
"followers_url": "https://api.github.com/users/duhow/followers",
"following_url": "https://api.github.com/users/duhow/following{/other_user}",
"gists_url": "https://api.github.com/users/duhow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duhow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duhow/subscriptions",
"organizations_url": "https://api.github.com/users/duhow/orgs",
"repos_url": "https://api.github.com/users/duhow/repos",
"events_url": "https://api.github.com/users/duhow/events{/privacy}",
"received_events_url": "https://api.github.com/users/duhow/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 17
| 2023-12-13T11:51:05
| 2024-08-05T09:54:49
| 2024-05-25T15:36:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://github.com/jmorganca/ollama/blob/7db5bcf73bf7026970e988f56126db8f370f1b11/llm/llama.go#L238
Getting the GPU information (full-GPU memory) is not possible: the command above returns `Insufficient Permissions`, because the container is assigned only part of the GPU via MIG (Multi-Instance GPU).
However, the container can actually view the MIG devices, and `ollama` should be able to use them.
```
root@ollama-0:/# nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.108.03 Driver Version: 510.108.03 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100 80G... Off | 00000000:05:00.0 Off | On |
| N/A 35C P0 43W / 300W | N/A | N/A Default |
| | | Enabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| MIG devices: |
+------------------+----------------------+-----------+-----------------------+
| GPU GI CI MIG | Memory-Usage | Vol| Shared |
| ID ID Dev | BAR1-Usage | SM Unc| CE ENC DEC OFA JPG|
| | | ECC| |
|==================+======================+===========+=======================|
| 0 7 0 0 | 6MiB / 9728MiB | 14 0 | 1 0 0 0 0 |
| | 0MiB / 16383MiB | | |
+------------------+----------------------+-----------+-----------------------+
| 0 8 0 1 | 6MiB / 9728MiB | 14 0 | 1 0 0 0 0 |
| | 0MiB / 16383MiB | | |
+------------------+----------------------+-----------+-----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1500/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1500/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6501
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6501/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6501/comments
|
https://api.github.com/repos/ollama/ollama/issues/6501/events
|
https://github.com/ollama/ollama/issues/6501
| 2,485,238,132
|
I_kwDOJ0Z1Ps6UIbl0
| 6,501
|
llama_get_logits_ith: invalid logits id 11, reason: no logits
|
{
"login": "zhaowanjin",
"id": 50563601,
"node_id": "MDQ6VXNlcjUwNTYzNjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/50563601?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaowanjin",
"html_url": "https://github.com/zhaowanjin",
"followers_url": "https://api.github.com/users/zhaowanjin/followers",
"following_url": "https://api.github.com/users/zhaowanjin/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaowanjin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaowanjin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaowanjin/subscriptions",
"organizations_url": "https://api.github.com/users/zhaowanjin/orgs",
"repos_url": "https://api.github.com/users/zhaowanjin/repos",
"events_url": "https://api.github.com/users/zhaowanjin/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaowanjin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 2
| 2024-08-25T13:04:23
| 2024-08-25T15:36:29
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
llama_get_logits_ith: invalid logits id 11, reason: no logits
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6501/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2986
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2986/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2986/comments
|
https://api.github.com/repos/ollama/ollama/issues/2986/events
|
https://github.com/ollama/ollama/pull/2986
| 2,174,551,592
|
PR_kwDOJ0Z1Ps5pAMi6
| 2,986
|
Wire up more complete CI for releases
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-07T18:54:46
| 2024-03-07T19:25:35
| 2024-03-07T19:25:19
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2986",
"html_url": "https://github.com/ollama/ollama/pull/2986",
"diff_url": "https://github.com/ollama/ollama/pull/2986.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2986.patch",
"merged_at": null
}
|
Not ready for review yet...
Flesh out our GitHub Actions CI so we can build official releases.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2986/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3594
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3594/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3594/comments
|
https://api.github.com/repos/ollama/ollama/issues/3594/events
|
https://github.com/ollama/ollama/issues/3594
| 2,237,583,067
|
I_kwDOJ0Z1Ps6FXs7b
| 3,594
|
error - dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host
|
{
"login": "Ajay-Engineer",
"id": 92223770,
"node_id": "U_kgDOBX85Gg",
"avatar_url": "https://avatars.githubusercontent.com/u/92223770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ajay-Engineer",
"html_url": "https://github.com/Ajay-Engineer",
"followers_url": "https://api.github.com/users/Ajay-Engineer/followers",
"following_url": "https://api.github.com/users/Ajay-Engineer/following{/other_user}",
"gists_url": "https://api.github.com/users/Ajay-Engineer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ajay-Engineer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ajay-Engineer/subscriptions",
"organizations_url": "https://api.github.com/users/Ajay-Engineer/orgs",
"repos_url": "https://api.github.com/users/Ajay-Engineer/repos",
"events_url": "https://api.github.com/users/Ajay-Engineer/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ajay-Engineer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-04-11T12:11:48
| 2024-06-19T16:51:22
| 2024-04-15T19:16:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi team im getting this error below -
C:\Windows\System32>ollama run gemma
pulling manifest
Error: Head "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/ef/ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240411%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240411T115747Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=1f76e44610af72e8b0d79dab1389991616ef08be32f9747fe7eb1feae6711eff": dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host
I have tried many ways to download this. Please help me fix this error.
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.31
### GPU
Nvidia
### GPU info
_No response_
### CPU
AMD
### Other software
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3594/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/241
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/241/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/241/comments
|
https://api.github.com/repos/ollama/ollama/issues/241/events
|
https://github.com/ollama/ollama/issues/241
| 1,827,906,717
|
I_kwDOJ0Z1Ps5s86Sd
| 241
|
Error : Post /api/generate :EOF
|
{
"login": "ajasingh",
"id": 15189049,
"node_id": "MDQ6VXNlcjE1MTg5MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/15189049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ajasingh",
"html_url": "https://github.com/ajasingh",
"followers_url": "https://api.github.com/users/ajasingh/followers",
"following_url": "https://api.github.com/users/ajasingh/following{/other_user}",
"gists_url": "https://api.github.com/users/ajasingh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ajasingh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ajasingh/subscriptions",
"organizations_url": "https://api.github.com/users/ajasingh/orgs",
"repos_url": "https://api.github.com/users/ajasingh/repos",
"events_url": "https://api.github.com/users/ajasingh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ajasingh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 12
| 2023-07-30T11:50:14
| 2023-08-10T19:28:59
| 2023-08-10T19:28:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm not able to run llama2. It gives me the error below when I run `ollama run llama2`:
Error: Post "http://127.0.0.1/api/generate": EOF
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/241/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/241/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3387
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3387/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3387/comments
|
https://api.github.com/repos/ollama/ollama/issues/3387/events
|
https://github.com/ollama/ollama/issues/3387
| 2,213,272,750
|
I_kwDOJ0Z1Ps6D69yu
| 3,387
|
Multiline input buffer too small
|
{
"login": "FairyTail2000",
"id": 22645621,
"node_id": "MDQ6VXNlcjIyNjQ1NjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/22645621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FairyTail2000",
"html_url": "https://github.com/FairyTail2000",
"followers_url": "https://api.github.com/users/FairyTail2000/followers",
"following_url": "https://api.github.com/users/FairyTail2000/following{/other_user}",
"gists_url": "https://api.github.com/users/FairyTail2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FairyTail2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FairyTail2000/subscriptions",
"organizations_url": "https://api.github.com/users/FairyTail2000/orgs",
"repos_url": "https://api.github.com/users/FairyTail2000/repos",
"events_url": "https://api.github.com/users/FairyTail2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/FairyTail2000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6960960225,
"node_id": "LA_kwDOJ0Z1Ps8AAAABnufS4Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/cli",
"name": "cli",
"color": "5319e7",
"default": false,
"description": "Issues related to the Ollama CLI"
}
] |
open
| false
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-03-28T13:39:13
| 2024-05-18T03:43:49
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When testing out model capabilities, I copy-pasted the entire rendered version of the [ollama readme](https://github.com/ollama/ollama/blob/756c2575535641f1b96d94b4214941b90f4c30c7/README.md) into the terminal to be consumed by the LLM.
While Ollama crashed due to insufficient VRAM, that is not what this issue is about. It accepted everything up to:
> .. Remove a model
... ollama rm llama2
... Copy a model
... ollama cp llama2 my-llama2
... Multiline input
... For multiline input, you can wrap text with """:
...
... >>> """Hello,
... ... world!
... ... """
Error: Post "http://0.0.0.0:11434/api/chat": EOF
~ $ d '{
quote> "model": "mistral",
quote> "messages": [
quote> { "role": "user", "content": "why is the sky blue?" }
quote> ]
quote> }'
zsh: command not found: d
After that, the remaining input overflowed into standard input:

### What did you expect to see?
Ollama taking the whole input
### Steps to reproduce
Copy the rendered version of the readme linked above, paste it, and watch it overflow
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
x86
### Platform
_No response_
### Ollama version
0.1.29
### GPU
Nvidia
### GPU info
_No response_
### CPU
Intel
### Other software
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3387/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3833
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3833/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3833/comments
|
https://api.github.com/repos/ollama/ollama/issues/3833/events
|
https://github.com/ollama/ollama/pull/3833
| 2,257,597,882
|
PR_kwDOJ0Z1Ps5taGk1
| 3,833
|
fix: from blob
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-22T22:57:28
| 2024-04-24T22:13:48
| 2024-04-24T22:13:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3833",
"html_url": "https://github.com/ollama/ollama/pull/3833",
"diff_url": "https://github.com/ollama/ollama/pull/3833.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3833.patch",
"merged_at": "2024-04-24T22:13:47"
}
|
`FROM` previously used a replace-all, which may affect other parts of the Modelfile. Instead, replace just the line.
resolves #3685
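As an illustrative sketch in Python (not the actual Go change), the difference between a global replace-all and a line-scoped replacement can be shown on a toy Modelfile:

```python
modelfile = "\n".join([
    "FROM llama2",
    "# comment mentioning llama2",
    "PARAMETER temperature 0.7",
])

# Naive replace-all also rewrites the comment, touching unrelated lines.
naive = modelfile.replace("llama2", "sha256:abc123")

# Line-scoped replacement changes only the FROM directive.
scoped = "\n".join(
    "FROM sha256:abc123" if line.startswith("FROM ") else line
    for line in modelfile.splitlines()
)

print(scoped)
```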
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3833/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3674
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3674/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3674/comments
|
https://api.github.com/repos/ollama/ollama/issues/3674/events
|
https://github.com/ollama/ollama/issues/3674
| 2,246,472,198
|
I_kwDOJ0Z1Ps6F5nIG
| 3,674
|
can we have a flatpak or snap app
|
{
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/followers",
"following_url": "https://api.github.com/users/olumolu/following{/other_user}",
"gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olumolu/subscriptions",
"organizations_url": "https://api.github.com/users/olumolu/orgs",
"repos_url": "https://api.github.com/users/olumolu/repos",
"events_url": "https://api.github.com/users/olumolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/olumolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5895046125,
"node_id": "LA_kwDOJ0Z1Ps8AAAABX19D7Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/integration",
"name": "integration",
"color": "92E43A",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-16T16:34:57
| 2025-01-15T07:43:21
| 2025-01-15T07:43:21
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
I think having a Flatpak or Snap app makes more sense. Snap is also well suited for command-line applications.
### How should we solve this?
https://docs.flatpak.org/en/latest/first-build.html
https://discourse.flathub.org/
https://snapcraft.io/store
https://forum.snapcraft.io/
### What is the impact of not solving this?
Millions of users run immutable distros, the Steam Deck with SteamOS, and ChromeOS on Chromebooks.
### Anything else?
_No response_
|
{
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/followers",
"following_url": "https://api.github.com/users/olumolu/following{/other_user}",
"gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olumolu/subscriptions",
"organizations_url": "https://api.github.com/users/olumolu/orgs",
"repos_url": "https://api.github.com/users/olumolu/repos",
"events_url": "https://api.github.com/users/olumolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/olumolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3674/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3674/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2965
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2965/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2965/comments
|
https://api.github.com/repos/ollama/ollama/issues/2965/events
|
https://github.com/ollama/ollama/issues/2965
| 2,172,741,202
|
I_kwDOJ0Z1Ps6BgWZS
| 2,965
|
Is it possible to support more embedding models in the future?
|
{
"login": "wwjCMP",
"id": 32979859,
"node_id": "MDQ6VXNlcjMyOTc5ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/32979859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwjCMP",
"html_url": "https://github.com/wwjCMP",
"followers_url": "https://api.github.com/users/wwjCMP/followers",
"following_url": "https://api.github.com/users/wwjCMP/following{/other_user}",
"gists_url": "https://api.github.com/users/wwjCMP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wwjCMP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwjCMP/subscriptions",
"organizations_url": "https://api.github.com/users/wwjCMP/orgs",
"repos_url": "https://api.github.com/users/wwjCMP/repos",
"events_url": "https://api.github.com/users/wwjCMP/events{/privacy}",
"received_events_url": "https://api.github.com/users/wwjCMP/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-03-07T01:09:31
| 2024-03-08T04:59:45
| 2024-03-08T04:59:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The nomic-embed-text is great, but it does not support Chinese.
|
{
"login": "wwjCMP",
"id": 32979859,
"node_id": "MDQ6VXNlcjMyOTc5ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/32979859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwjCMP",
"html_url": "https://github.com/wwjCMP",
"followers_url": "https://api.github.com/users/wwjCMP/followers",
"following_url": "https://api.github.com/users/wwjCMP/following{/other_user}",
"gists_url": "https://api.github.com/users/wwjCMP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wwjCMP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwjCMP/subscriptions",
"organizations_url": "https://api.github.com/users/wwjCMP/orgs",
"repos_url": "https://api.github.com/users/wwjCMP/repos",
"events_url": "https://api.github.com/users/wwjCMP/events{/privacy}",
"received_events_url": "https://api.github.com/users/wwjCMP/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2965/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3356
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3356/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3356/comments
|
https://api.github.com/repos/ollama/ollama/issues/3356/events
|
https://github.com/ollama/ollama/pull/3356
| 2,207,186,254
|
PR_kwDOJ0Z1Ps5qu_Dw
| 3,356
|
Switch runner for final release job
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-26T03:52:00
| 2024-03-26T03:54:50
| 2024-03-26T03:54:46
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3356",
"html_url": "https://github.com/ollama/ollama/pull/3356",
"diff_url": "https://github.com/ollama/ollama/pull/3356.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3356.patch",
"merged_at": "2024-03-26T03:54:46"
}
|
The manifest and tagging step use a lot of disk space
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3356/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6075
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6075/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6075/comments
|
https://api.github.com/repos/ollama/ollama/issues/6075/events
|
https://github.com/ollama/ollama/issues/6075
| 2,437,988,526
|
I_kwDOJ0Z1Ps6RUMCu
| 6,075
|
`llama3.1:8b` gives empty response first, then hallucinates when prompted with `system` role
|
{
"login": "rb81",
"id": 48117105,
"node_id": "MDQ6VXNlcjQ4MTE3MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/48117105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rb81",
"html_url": "https://github.com/rb81",
"followers_url": "https://api.github.com/users/rb81/followers",
"following_url": "https://api.github.com/users/rb81/following{/other_user}",
"gists_url": "https://api.github.com/users/rb81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rb81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rb81/subscriptions",
"organizations_url": "https://api.github.com/users/rb81/orgs",
"repos_url": "https://api.github.com/users/rb81/repos",
"events_url": "https://api.github.com/users/rb81/events{/privacy}",
"received_events_url": "https://api.github.com/users/rb81/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-30T14:29:17
| 2024-07-30T14:51:33
| 2024-07-30T14:51:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I've suddenly started getting weird behavior from `llama3.1:8b` when prompting it with the `system` role. `user` works as expected. `system` was working a few days ago, so I don't know if this is related to an issue with a recent model or Ollama update. Anybody else facing similar issues?
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.3.0
|
{
"login": "rb81",
"id": 48117105,
"node_id": "MDQ6VXNlcjQ4MTE3MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/48117105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rb81",
"html_url": "https://github.com/rb81",
"followers_url": "https://api.github.com/users/rb81/followers",
"following_url": "https://api.github.com/users/rb81/following{/other_user}",
"gists_url": "https://api.github.com/users/rb81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rb81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rb81/subscriptions",
"organizations_url": "https://api.github.com/users/rb81/orgs",
"repos_url": "https://api.github.com/users/rb81/repos",
"events_url": "https://api.github.com/users/rb81/events{/privacy}",
"received_events_url": "https://api.github.com/users/rb81/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6075/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6097
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6097/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6097/comments
|
https://api.github.com/repos/ollama/ollama/issues/6097/events
|
https://github.com/ollama/ollama/issues/6097
| 2,440,063,879
|
I_kwDOJ0Z1Ps6RcGuH
| 6,097
|
ollama bad response
|
{
"login": "elifbykrbc",
"id": 119016055,
"node_id": "U_kgDOBxgKdw",
"avatar_url": "https://avatars.githubusercontent.com/u/119016055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elifbykrbc",
"html_url": "https://github.com/elifbykrbc",
"followers_url": "https://api.github.com/users/elifbykrbc/followers",
"following_url": "https://api.github.com/users/elifbykrbc/following{/other_user}",
"gists_url": "https://api.github.com/users/elifbykrbc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elifbykrbc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elifbykrbc/subscriptions",
"organizations_url": "https://api.github.com/users/elifbykrbc/orgs",
"repos_url": "https://api.github.com/users/elifbykrbc/repos",
"events_url": "https://api.github.com/users/elifbykrbc/events{/privacy}",
"received_events_url": "https://api.github.com/users/elifbykrbc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-07-31T13:27:11
| 2024-09-02T23:36:06
| 2024-09-02T23:36:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi, I'm working on an LLM project that uses phi3-vision. I recently pushed it into Ollama, but I'm getting much worse responses than I get from my Colab notebook. On Colab, phi3-vision recognizes images well, but when I run Ollama and ask the same question with the same image, it hallucinates heavily. I don't know what would cause the hallucination and bad responses; please help me work through this problem.
### OS
Windows
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.2.8
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6097/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/375
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/375/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/375/comments
|
https://api.github.com/repos/ollama/ollama/issues/375/events
|
https://github.com/ollama/ollama/pull/375
| 1,855,775,128
|
PR_kwDOJ0Z1Ps5YMwWJ
| 375
|
fix push manifest
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-17T22:16:13
| 2023-08-17T22:33:32
| 2023-08-17T22:33:31
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/375",
"html_url": "https://github.com/ollama/ollama/pull/375",
"diff_url": "https://github.com/ollama/ollama/pull/375.diff",
"patch_url": "https://github.com/ollama/ollama/pull/375.patch",
"merged_at": "2023-08-17T22:33:31"
}
|
When all blobs already exist in the registry, only the manifest is pushed. This request requires authentication, so it must follow the same token exchange flow.
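As a rough, self-contained sketch (Python, not the actual Go client), the token exchange starts from the `WWW-Authenticate` challenge the registry returns alongside a 401; the hostname and scope below are made up for illustration:

```python
import re

# Minimal sketch: parse the WWW-Authenticate challenge a registry returns
# on 401. The realm/service/scope parameters drive the token exchange the
# manifest push must also perform. Values here are hypothetical.
challenge = (
    'Bearer realm="https://registry.example/token",'
    'service="registry.example",scope="repository:library/model:push"'
)

params = dict(re.findall(r'(\w+)="([^"]*)"', challenge))
print(params["realm"], params["scope"])
```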
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/375/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6600
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6600/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6600/comments
|
https://api.github.com/repos/ollama/ollama/issues/6600/events
|
https://github.com/ollama/ollama/issues/6600
| 2,501,949,497
|
I_kwDOJ0Z1Ps6VILg5
| 6,600
|
In ollama, do these llama3.1 models refer to the pretrained basic models or instruction tuned models?
|
{
"login": "icecream-and-tea",
"id": 89716710,
"node_id": "MDQ6VXNlcjg5NzE2NzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/89716710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/icecream-and-tea",
"html_url": "https://github.com/icecream-and-tea",
"followers_url": "https://api.github.com/users/icecream-and-tea/followers",
"following_url": "https://api.github.com/users/icecream-and-tea/following{/other_user}",
"gists_url": "https://api.github.com/users/icecream-and-tea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/icecream-and-tea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/icecream-and-tea/subscriptions",
"organizations_url": "https://api.github.com/users/icecream-and-tea/orgs",
"repos_url": "https://api.github.com/users/icecream-and-tea/repos",
"events_url": "https://api.github.com/users/icecream-and-tea/events{/privacy}",
"received_events_url": "https://api.github.com/users/icecream-and-tea/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-09-03T05:28:01
| 2024-09-03T23:16:25
| 2024-09-03T23:16:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The model ID is `llama3.1:8b`, which seems to refer to the pretrained base model, but in the Ollama model library the reported performance of `llama3.1:8b` matches the fine-tuned (instruct) model.
Llama 3.1 Hugging Face performance:

The Ollama library introduction for llama3.1:

### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6600/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1200
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1200/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1200/comments
|
https://api.github.com/repos/ollama/ollama/issues/1200/events
|
https://github.com/ollama/ollama/issues/1200
| 2,001,006,591
|
I_kwDOJ0Z1Ps53RO__
| 1,200
|
CPU instead of GPU for Q5_1 models
|
{
"login": "ivanfioravanti",
"id": 1069210,
"node_id": "MDQ6VXNlcjEwNjkyMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1069210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivanfioravanti",
"html_url": "https://github.com/ivanfioravanti",
"followers_url": "https://api.github.com/users/ivanfioravanti/followers",
"following_url": "https://api.github.com/users/ivanfioravanti/following{/other_user}",
"gists_url": "https://api.github.com/users/ivanfioravanti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivanfioravanti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivanfioravanti/subscriptions",
"organizations_url": "https://api.github.com/users/ivanfioravanti/orgs",
"repos_url": "https://api.github.com/users/ivanfioravanti/repos",
"events_url": "https://api.github.com/users/ivanfioravanti/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivanfioravanti/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2023-11-19T19:40:12
| 2023-11-20T22:04:32
| 2023-11-20T21:56:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Running the alfred q5_1 model uses the CPU instead of the GPU, while latest uses the GPU properly.
Tested on an Apple M2 Ultra (cores: 8E+16P+76GPU) with 192GB RAM, using asitop.
Here are the commands and attached logs:
`ollama run alfred:40b-1023-q5_1 "give me a list of document with {city:city name, country:country name} at least 3 use json format"`
`ollama run latest "give me a list of document with {city:city name, country:country name} at least 3 use json format"`
[serverCPU.log](https://github.com/jmorganca/ollama/files/13404438/serverCPU.log)
[serverGPU.log](https://github.com/jmorganca/ollama/files/13404439/serverGPU.log)
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1200/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6700
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6700/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6700/comments
|
https://api.github.com/repos/ollama/ollama/issues/6700/events
|
https://github.com/ollama/ollama/issues/6700
| 2,512,250,328
|
I_kwDOJ0Z1Ps6VveXY
| 6,700
|
MiniCPM3 not supported
|
{
"login": "sataliulan",
"id": 6769310,
"node_id": "MDQ6VXNlcjY3NjkzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6769310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sataliulan",
"html_url": "https://github.com/sataliulan",
"followers_url": "https://api.github.com/users/sataliulan/followers",
"following_url": "https://api.github.com/users/sataliulan/following{/other_user}",
"gists_url": "https://api.github.com/users/sataliulan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sataliulan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sataliulan/subscriptions",
"organizations_url": "https://api.github.com/users/sataliulan/orgs",
"repos_url": "https://api.github.com/users/sataliulan/repos",
"events_url": "https://api.github.com/users/sataliulan/events{/privacy}",
"received_events_url": "https://api.github.com/users/sataliulan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-09-08T07:51:17
| 2024-10-15T11:02:49
| 2024-09-09T00:40:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
hi there,
I tried to run MiniCPM3 with Ollama. I converted the model's bin file to GGUF and quantized it to Q4_K_M successfully.
Then I created a Modelfile and ran the model with the command:
ollama run MiniCPM3Q4 "hi ,你好吗"
And I got an error, as shown in the picture:

It seems my Ollama (version 0.3.9) does not support this model.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.9
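For anyone attempting the same conversion, a minimal Modelfile pointing at a locally quantized GGUF might look like this (the file name and parameter here are hypothetical examples, not from the original report):

```
# FROM takes a path to the quantized GGUF produced by llama.cpp's convert/quantize tools
FROM ./minicpm3-q4_k_m.gguf
PARAMETER temperature 0.7
```

The model is then registered with `ollama create MiniCPM3Q4 -f Modelfile` before running it.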
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6700/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3832
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3832/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3832/comments
|
https://api.github.com/repos/ollama/ollama/issues/3832/events
|
https://github.com/ollama/ollama/issues/3832
| 2,257,564,202
|
I_kwDOJ0Z1Ps6Gj7Iq
| 3,832
|
ilsp/Meltemi-7B-Instruct-v1-GGUF request
|
{
"login": "myrulezzz",
"id": 43094013,
"node_id": "MDQ6VXNlcjQzMDk0MDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/43094013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/myrulezzz",
"html_url": "https://github.com/myrulezzz",
"followers_url": "https://api.github.com/users/myrulezzz/followers",
"following_url": "https://api.github.com/users/myrulezzz/following{/other_user}",
"gists_url": "https://api.github.com/users/myrulezzz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/myrulezzz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/myrulezzz/subscriptions",
"organizations_url": "https://api.github.com/users/myrulezzz/orgs",
"repos_url": "https://api.github.com/users/myrulezzz/repos",
"events_url": "https://api.github.com/users/myrulezzz/events{/privacy}",
"received_events_url": "https://api.github.com/users/myrulezzz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 5
| 2024-04-22T22:25:39
| 2024-04-23T15:13:33
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi Ollama team. Is it possible to add the ilsp/Meltemi-7B-Instruct-v1-GGUF model to the repository? Thanks in advance.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3832/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/21
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/21/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/21/comments
|
https://api.github.com/repos/ollama/ollama/issues/21/events
|
https://github.com/ollama/ollama/issues/21
| 1,781,316,332
|
I_kwDOJ0Z1Ps5qLLrs
| 21
|
ollama run without model error should be caught
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2023-06-29T18:46:06
| 2023-06-29T19:53:41
| 2023-06-29T19:53:41
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
PS C:\Users\mail> ollama run
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\mail\AppData\Local\Programs\Python\Python311\Scripts\ollama.exe\__main__.py", line 4, in <module>
File "C:\Users\mail\AppData\Local\Programs\Python\Python311\Lib\site-packages\ollama\cmd\cli.py", line 8, in <module>
from ollama.cmd import server
File "C:\Users\mail\AppData\Local\Programs\Python\Python311\Lib\site-packages\ollama\cmd\server.py", line 1, in <module>
from aiohttp import web
ModuleNotFoundError: No module named 'aiohttp'
PS C:\Users\mail>
```
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/21/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/21/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1856
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1856/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1856/comments
|
https://api.github.com/repos/ollama/ollama/issues/1856/events
|
https://github.com/ollama/ollama/pull/1856
| 2,070,892,665
|
PR_kwDOJ0Z1Ps5jf-P1
| 1,856
|
remove ggml automatic re-pull
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-08T17:09:55
| 2024-01-08T19:41:02
| 2024-01-08T19:41:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1856",
"html_url": "https://github.com/ollama/ollama/pull/1856",
"diff_url": "https://github.com/ollama/ollama/pull/1856.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1856.patch",
"merged_at": "2024-01-08T19:41:01"
}
|
Remove the ggml automatic re-pull now that the ggml removal has been rolled out. This prevents a possible future bug where non-ggml models always get pulled on run.
When an unsupported model format is run, the error message is displayed to the user:
```
$ ollama run orca-mini
Error: unsupported model format: this model may be incompatible with your version of Ollama. If you previously pulled this model, try updating it by running `ollama pull orca-mini:latest`
$ ollama create mario -f ~/models/mario/Modelfile
transferring model data
reading model metadata
Error: orca-mini is not in gguf format, this base model is not compatible with this version of ollama
```
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1856/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6843
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6843/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6843/comments
|
https://api.github.com/repos/ollama/ollama/issues/6843/events
|
https://github.com/ollama/ollama/issues/6843
| 2,532,001,593
|
I_kwDOJ0Z1Ps6W60c5
| 6,843
|
Help Needed! Connecting Ollama’s llama3:8b to External Platforms and Connection Refused Error
|
{
"login": "camilleconte8",
"id": 178808917,
"node_id": "U_kgDOCqhoVQ",
"avatar_url": "https://avatars.githubusercontent.com/u/178808917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/camilleconte8",
"html_url": "https://github.com/camilleconte8",
"followers_url": "https://api.github.com/users/camilleconte8/followers",
"following_url": "https://api.github.com/users/camilleconte8/following{/other_user}",
"gists_url": "https://api.github.com/users/camilleconte8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/camilleconte8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/camilleconte8/subscriptions",
"organizations_url": "https://api.github.com/users/camilleconte8/orgs",
"repos_url": "https://api.github.com/users/camilleconte8/repos",
"events_url": "https://api.github.com/users/camilleconte8/events{/privacy}",
"received_events_url": "https://api.github.com/users/camilleconte8/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2024-09-17T19:55:07
| 2024-09-20T11:59:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Help Needed! Connecting Ollama’s llama3:8b to External Platforms and Connection Refused Error
I am new to development and have a Windows machine where I have set up a WSL2 environment with Ubuntu 22.04.4 LTS (GNU/Linux 5.15.153.1-microsoft-standard-WSL2 x86_64). I've been working on setting up Ollama, specifically running the **llama3:8b model** on my local machine.
For more context, I have been following this tutorial (https://blog.getwren.ai/how-to-use-meta-llama-3-to-query-mysql-database-using-ollama-on-your-machine-2c087b204e41). (I know it says to use the 70B llama3 model, but my machine isn't powerful enough. I thought that was my problem at first, but later discovered it's not, which I will get into.)
Ollama works great when I interact with it via **localhost** and even within a **Docker container**. However, I've encountered significant challenges when attempting to connect it with external services like **Wren AI** and **Replit**, where I consistently run into connection errors, specifically [Errno 111] Connection refused. Below is a detailed account of my setup and the issues I'm experiencing.
### 1. **Ollama Setup on My Local Machine**
First, I managed to successfully install and run Ollama's llama3:8b model on my local machine. Here’s a quick overview of my environment:
- **Ollama is running on localhost** (127.0.0.1) on port **11434** (default). When I visit http://localhost:11434/, it says "Ollama is running."
- The model is **llama3:8b**, an 8-billion parameter model from Meta AI, integrated through Ollama by the command ollama pull llama3:8b.
- Here are the status logs after running sudo systemctl status ollama:
<details>
● ollama.service - Ollama Service
Loaded: loaded (/etc/systemd/system/ollama.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2024-09-16 16:37:13 EDT; 19h ago
Main PID: 58329 (ollama)
Tasks: 15 (limit: 14999)
Memory: 20.9M
CGroup: /system.slice/ollama.service
└─58329 /usr/local/bin/ollama serve
Sep 16 16:37:13 camillecore ollama[58329]: time=2024-09-16T16:37:13.683-04:00 level=INFO source=images.go:782 msg="total blobs: 10"
Sep 16 16:37:13 camillecore ollama[58329]: time=2024-09-16T16:37:13.683-04:00 level=INFO source=images.go:790 msg="total unused blobs removed: 0"
Sep 16 16:37:13 camillecore ollama[58329]: time=2024-09-16T16:37:13.683-04:00 level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.6)"
Sep 16 16:37:13 camillecore ollama[58329]: time=2024-09-16T16:37:13.684-04:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama53233405/runners
Sep 16 16:37:16 camillecore ollama[58329]: time=2024-09-16T16:37:16.542-04:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [rocm_v60102 cpu cpu_avx cpu_avx2 cuda_v11]"
Sep 16 16:37:16 camillecore ollama[58329]: time=2024-09-16T16:37:16.542-04:00 level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
Sep 16 16:37:21 camillecore ollama[58329]: time=2024-09-16T16:37:21.239-04:00 level=INFO source=gpu.go:350 msg="no compatible GPUs were discovered"
Sep 16 16:37:21 camillecore ollama[58329]: time=2024-09-16T16:37:21.239-04:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="19.4 GiB" available="17.1 GiB"
Sep 16 16:37:28 camillecore ollama[58329]: [GIN] 2024/09/16 - 16:37:28 | 200 | 821.899µs | 127.0.0.1 | GET "/api/tags"
Sep 16 16:37:29 camillecore ollama[58329]: [GIN] 2024/09/16 - 16:37:29 | 200 | 1.216507046s | 127.0.0.1 | POST "/api/pull"
</details>
- This is the configuration of my ollama.service file:
<details>
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/wsl/lib:/mnt/c/Windows/system32:/mnt/c/Windows:/mnt/c/Windows/System32/Wbem:/mnt/c/Windows/System32/WindowsPowerShell/v1.0/:/mnt/c/Windows/System32/OpenSSH/:/Docker/host/bin:/mnt/c/Users/camil/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/camil/AppData/Local/Programs/Microsoft VS Code/bin:/snap/bin"
Environment="OLLAMA_MODELS=/usr/share/ollama/.ollama/models"
Environment="OLLAMA_HOST=127.0.0.1:11434"
Environment="OLLAMA_ORIGINS=http://127.0.0.1:11434"
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
Environment="CUDA_VISIBLE_DEVICES="
[Install]
WantedBy=default.target
</details>
**Modifications I've made to ollama.service:**
<details>
a. Added Environment="OLLAMA_HOST=127.0.0.1:11434" and Environment="OLLAMA_ORIGINS=http://127.0.0.1:11434". I found this (https://github.com/ollama/ollama/issues/2132) issue where one person suggested adding these, but with the 0.0.0.0 IP address instead of 127.0.0.1. I tried that as well and got the same errors.
b. Added Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0" and Environment="CUDA_VISIBLE_DEVICES=", which were suggestions from another person in that thread (the override part) and ChatGPT (the visible-devices part). Again, same error.
c. I also opened the firewall for port 11434 with sudo ufw allow 11434, then sudo ufw reload, then sudo systemctl restart ollama.service.
</details>
- Here are various commands I've run to confirm Ollama is functioning on my machine:
<details>
camilleconte@camillecore:~$ sudo lsof -i :11434
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ollama 34328 ollama 3u IPv4 728857 0t0 TCP localhost:11434 (LISTEN)
camilleconte@camillecore:~$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
llama3:8b 365c0bd3c000 6.2 GB 100% CPU 4 minutes from now
camilleconte@camillecore:~$ sudo lsof -i :11434
[sudo] password for camilleconte:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ollama 58329 ollama 3u IPv4 2813691 0t0 TCP localhost:11434 (LISTEN)
</details>
- Here I show that I can interact with Ollama through HTTP requests in my terminal using `curl`:
<details>
camilleconte@camillecore:~$ curl -X POST http://localhost:11434/api/generate \
-H "Content-Type: application/json" \
-d '{
"model": "llama3:8b",
"prompt": "What is the capital of France?",
"temperature": 0.7
}'
Response:
{"model":"llama3:8b","created_at":"2024-09-17T16:26:43.660429962Z","response":"The","done":false}
{"model":"llama3:8b","created_at":"2024-09-17T16:26:43.895378292Z","response":" capital","done":false}
{"model":"llama3:8b","created_at":"2024-09-17T16:26:44.14832767Z","response":" of","done":false}
{"model":"llama3:8b","created_at":"2024-09-17T16:26:44.403461964Z","response":" France","done":false}
{"model":"llama3:8b","created_at":"2024-09-17T16:26:44.657318964Z","response":" is","done":false}
{"model":"llama3:8b","created_at":"2024-09-17T16:26:44.910058883Z","response":" Paris","done":false}
{"model":"llama3:8b","created_at":"2024-09-17T16:26:45.176020402Z","response":".","done":false}
{"model":"llama3:8b","created_at":"2024-09-17T16:26:45.44038917Z","response":"","done":true,"done_reason":"stop","context":[128006,882,128007,271,3923,374,279,6864,315,9822,30,128009,128006,78191,128007,271,791,6864,315,9822,374,12366,13],"total_duration":8302556474,"load_duration":4573576269,"prompt_eval_count":17,"prompt_eval_duration":1893859000,"eval_count":8,"eval_duration":1780122000}
</details>
As you can see, this interaction works perfectly and returns the correct responses directly in the terminal. Now I will show the external connections I've been trying to make that have failed.
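As an aside, /api/generate streams newline-delimited JSON by default (as in the curl output above); the chunks can be reassembled with jq. This is a sketch using sample chunks inlined so it runs without a live server, and it assumes jq is installed:

```shell
# Reassemble a streamed /api/generate response into one string.
# Each input line is one JSON chunk; -j (--join-output) prints raw values with no newlines.
printf '%s\n' \
  '{"model":"llama3:8b","response":"The","done":false}' \
  '{"model":"llama3:8b","response":" capital","done":false}' \
  '{"model":"llama3:8b","response":" of","done":false}' \
  '{"model":"llama3:8b","response":" France","done":false}' \
  '{"model":"llama3:8b","response":" is","done":false}' \
  '{"model":"llama3:8b","response":" Paris.","done":true}' \
  | jq -j '.response'
# prints: The capital of France is Paris.
```

Against a live server, the same filter works on the real stream: pipe the curl command shown above into `jq -j '.response'`.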
### 2. **Issues with Connecting to external services**
While interacting with Ollama locally works well, I’ve been facing persistent issues when trying to integrate it with external services, namely **Wren AI** and **Replit**.
- **Wren AI**: Wren AI is intended to integrate with a local or external LLM host, in this case, my Ollama instance. However, whenever I try to connect Wren AI to the **localhost:11434** port where Ollama is running, I receive connection errors indicating that the API cannot be reached. These errors typically take the form of **"Connection refused"** errors, as Wren AI repeatedly fails to establish communication with the Ollama host. Following this (https://blog.getwren.ai/how-to-use-meta-llama-3-to-query-mysql-database-using-ollama-on-your-machine-2c087b204e41) tutorial, here are the **relevant files** in the docker project directory, where a container called wren-ai-service-1 forwards my ai service to port 5555:
1. .env.ai
<details>
## LLM
LLM_PROVIDER=ollama_llm # openai_llm, azure_openai_llm, ollama_llm
GENERATION_MODEL=llama3:8b
GENERATION_MODEL_KWARGS='{"temperature": 0}'
# openai or openai-api-compatible
#LLM_OPENAI_API_KEY=sk-xxxx
#LLM_OPENAI_API_BASE=https://api.openai.com/v1
# azure_openai
#LLM_AZURE_OPENAI_API_KEY=
#LLM_AZURE_OPENAI_API_BASE=
#LLM_AZURE_OPENAI_VERSION=
# ollama
LLM_OLLAMA_URL=http://host.docker.internal:11434
## EMBEDDER
EMBEDDER_PROVIDER=ollama_embedder # openai_embedder, azure_openai_embedder, ollama_embedder
# supported embedding models providers by qdrant: https://qdrant.tech/documentation/embeddings/
EMBEDDING_MODEL=nomic-embed-text
EMBEDDING_MODEL_DIMENSION=768
# openai or openai-api-compatible
#EMBEDDER_OPENAI_API_KEY=sk-xxxx
#EMBEDDER_OPENAI_API_BASE=https://api.openai.com/v1
# azure_openai
#EMBEDDER_AZURE_OPENAI_API_KEY=
#EMBEDDER_AZURE_OPENAI_API_BASE=
#EMBEDDER_AZURE_OPENAI_VERSION=
# ollama
EMBEDDER_OLLAMA_URL=http://host.docker.internal:11434
## DOCUMENT_STORE
DOCUMENT_STORE_PROVIDER=qdrant
QDRANT_HOST=qdrant
</details>
2. .env
<details>
COMPOSE_PROJECT_NAME=wrenai
PLATFORM=linux/amd64
PROJECT_DIR=/home/camilleconte/.wrenai
# service port
WREN_ENGINE_PORT=8080
WREN_ENGINE_SQL_PORT=7432
WREN_AI_SERVICE_PORT=5555
WREN_UI_PORT=3000
IBIS_SERVER_PORT=8000
# service endpoint (for docker-compose-dev.yaml file)
WREN_UI_ENDPOINT=http://wren-ui:3000
# LLM
#LLM_OPENAI_API_KEY=
#EMBEDDER_OPENAI_API_KEY=
LLM_PROVIDER=ollama_llm # openai_llm, azure_openai_llm, ollama_llm
GENERATION_MODEL_KWARGS='{"temperature": 0}'
GENERATION_MODEL=llama3:8b # gpt-4o-mini, gpt-4o, gpt-4-turbo, gpt-3.5-turbo
LLM_OLLAMA_URL=http://host.docker.internal:11434
## EMBEDDER
EMBEDDER_PROVIDER=ollama_embedder # openai_embedder, azure_openai_embedder, ollama_embedder
# supported embedding models providers by qdrant: https://qdrant.tech/documentation/embeddings/
EMBEDDING_MODEL=nomic-embed-text
EMBEDDING_MODEL_DIMENSION=768
# version
# CHANGE THIS TO THE LATEST VERSION
WREN_PRODUCT_VERSION=0.7.5
WREN_ENGINE_VERSION=0.9.0
WREN_AI_SERVICE_VERSION=0.8.0
IBIS_SERVER_VERSION=0.9.0
WREN_UI_VERSION=0.9.2
WREN_BOOTSTRAP_VERSION=0.1.5
# AI service related env variables
AI_SERVICE_ENABLE_TIMER=
AI_SERVICE_LOGGING_LEVEL=INFO
SHOULD_FORCE_DEPLOY=
QDRANT_HOST=qdrant
# user id (uuid v4)
USER_UUID=1aa9be21-4235-4723-9632-6d1645571769
# for other services
POSTHOG_API_KEY=phc_nhF32aj4xHXOZb0oqr2cn4Oy9uiWzz6CCP4KZmRq9aE
POSTHOG_HOST=https://app.posthog.com
TELEMETRY_ENABLED=true
# the port exposes to the host
# OPTIONAL: change the port if you have a conflict
HOST_PORT=3000
AI_SERVICE_FORWARD_PORT=5555
</details>
3. docker-compose.yaml
<details>
```yaml
version: "3.8"

volumes:
  data:

networks:
  wren:
    driver: bridge

services:
  bootstrap:
    image: ghcr.io/canner/wren-bootstrap:${WREN_BOOTSTRAP_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    environment:
      DATA_PATH: /app/data
    volumes:
      - data:/app/data
    command: /bin/sh /app/init.sh

  wren-engine:
    image: ghcr.io/canner/wren-engine:${WREN_ENGINE_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${WREN_ENGINE_PORT}
      - ${WREN_ENGINE_SQL_PORT}
    volumes:
      - data:/usr/src/app/etc
    networks:
      - wren
    depends_on:
      - bootstrap

  ibis-server:
    image: ghcr.io/canner/wren-engine-ibis:${IBIS_SERVER_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${IBIS_SERVER_PORT}
    environment:
      WREN_ENGINE_ENDPOINT: http://wren-engine:${WREN_ENGINE_PORT}
    networks:
      - wren

  wren-ai-service:
    image: ghcr.io/canner/wren-ai-service:${WREN_AI_SERVICE_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${WREN_AI_SERVICE_PORT}
    ports:
      - ${AI_SERVICE_FORWARD_PORT}:${WREN_AI_SERVICE_PORT}
    environment:
      LLM_PROVIDER: ollama_llm
      LLM_OLLAMA_URL: http://host.docker.internal:11434
      GENERATION_MODEL: llama3:8b
      GENERATION_MODEL_KWARGS: '{"temperature": 0}'
      EMBEDDER_PROVIDER: ollama_embedder
      EMBEDDING_MODEL: nomic-embed-text
      EMBEDDING_MODEL_DIMENSION: "768"
      #EMBEDDER_OPENAI_API_KEY: ""
      #EMBEDDER_AZURE_OPENAI_API_KEY: ""
      WREN_UI_PORT: "3000"
      WREN_UI_ENDPOINT: http://wren-ui:3000
      WREN_AI_SERVICE_PORT: "5555"
      ENABLE_TIMER: ""
      LOGGING_LEVEL: INFO
      PYTHONUNBUFFERED: 1
    networks:
      - wren
    depends_on:
      - wren-engine
      - qdrant

  qdrant:
    image: qdrant/qdrant:v1.7.4
    restart: on-failure
    expose:
      - 6333
      - 6334
    volumes:
      - data:/qdrant/storage
    networks:
      - wren

  wren-ui:
    image: ghcr.io/canner/wren-ui:${WREN_UI_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    environment:
      DB_TYPE: sqlite
      # /app is the working directory in the container
      SQLITE_FILE: /app/data/db.sqlite3
      WREN_ENGINE_ENDPOINT: http://wren-engine:${WREN_ENGINE_PORT}
      WREN_AI_ENDPOINT: http://wren-ai-service:${WREN_AI_SERVICE_PORT}
      IBIS_SERVER_ENDPOINT: http://ibis-server:${IBIS_SERVER_PORT}
      EMBEDDING_MODEL: ${EMBEDDING_MODEL}
      EMBEDDING_MODEL_DIMENSION: ${EMBEDDING_MODEL_DIMENSION}
      GENERATION_MODEL: ${GENERATION_MODEL}
      # telemetry
      WREN_ENGINE_PORT: ${WREN_ENGINE_PORT}
      WREN_AI_SERVICE_VERSION: ${WREN_AI_SERVICE_VERSION}
      WREN_UI_VERSION: ${WREN_UI_VERSION}
      WREN_ENGINE_VERSION: ${WREN_ENGINE_VERSION}
      USER_UUID: ${USER_UUID}
      POSTHOG_API_KEY: ${POSTHOG_API_KEY}
      POSTHOG_HOST: ${POSTHOG_HOST}
      TELEMETRY_ENABLED: ${TELEMETRY_ENABLED}
      # client side
      NEXT_PUBLIC_USER_UUID: ${USER_UUID}
      NEXT_PUBLIC_POSTHOG_API_KEY: ${POSTHOG_API_KEY}
      NEXT_PUBLIC_POSTHOG_HOST: ${POSTHOG_HOST}
      NEXT_PUBLIC_TELEMETRY_ENABLED: ${TELEMETRY_ENABLED}
      # configs
      WREN_PRODUCT_VERSION: ${WREN_PRODUCT_VERSION}
    ports:
      # HOST_PORT is the port you want to expose to the host machine
      - ${HOST_PORT}:3000
    volumes:
      - data:/app/data
    networks:
      - wren
    depends_on:
      - wren-ai-service
      - wren-engine
```
</details>
- When starting WrenAI with Docker, it looks like this:

I can visit localhost:3000 and it looks great, but then localhost:5555 always says ERR_EMPTY_RESPONSE. The docker logs for the wren-ai-service-1 container look like this:
<details>
2024-09-16 14:56:01 This module is deprecated and will be removed in Hamilton 2.0 Please use `hamilton.async_driver` instead.
2024-09-16 14:56:02 INFO: Started server process [7]
2024-09-16 14:56:02 INFO: Waiting for application startup.
2024-09-16 14:56:02 2024-09-16 18:56:02,058 - wren-ai-service - INFO - Initializing providers... (utils.py:64)
2024-09-16 14:56:03 2024-09-16 18:56:03,095 - wren-ai-service - INFO - Registering provider: qdrant (loader.py:66)
2024-09-16 14:56:03 2024-09-16 18:56:03,356 - wren-ai-service - INFO - Registering provider: azure_openai_embedder (loader.py:66)
2024-09-16 14:56:03 2024-09-16 18:56:03,359 - wren-ai-service - INFO - Registering provider: ollama_embedder (loader.py:66)
2024-09-16 14:56:03 2024-09-16 18:56:03,360 - wren-ai-service - INFO - Registering provider: openai_embedder (loader.py:66)
2024-09-16 14:56:03 2024-09-16 18:56:03,361 - wren-ai-service - INFO - Registering provider: wren_ui (loader.py:66)
2024-09-16 14:56:03 2024-09-16 18:56:03,361 - wren-ai-service - INFO - Registering provider: wren_ibis (loader.py:66)
2024-09-16 14:56:03 2024-09-16 18:56:03,361 - wren-ai-service - INFO - Registering provider: wren_engine (loader.py:66)
2024-09-16 11:42:03 Waiting for wren-ai-service to start...
2024-09-16 14:55:59 Waiting for wren-ai-service to start...
2024-09-16 14:56:03 2024-09-16 18:56:03,368 - wren-ai-service - INFO - Registering provider: azure_openai_llm (loader.py:66)
2024-09-16 14:56:03 2024-09-16 18:56:03,372 - wren-ai-service - INFO - Registering provider: ollama_llm (loader.py:66)
2024-09-16 14:56:03 2024-09-16 18:56:03,372 - wren-ai-service - INFO - Registering provider: openai_llm (loader.py:66)
2024-09-16 14:56:03 2024-09-16 18:56:03,402 - wren-ai-service - INFO - Pulling Ollama model llama3:8b (loader.py:109)
2024-09-16 14:56:05 2024-09-16 18:56:05,211 - wren-ai-service - INFO - Pulling Ollama model llama3:8b: 100% (loader.py:116)
2024-09-16 14:56:05 2024-09-16 18:56:05,217 - wren-ai-service - INFO - Using Ollama LLM: llama3:8b (ollama.py:135)
2024-09-16 14:56:05 2024-09-16 18:56:05,217 - wren-ai-service - INFO - Using Ollama URL: http://host.docker.internal:11434 (ollama.py:136)
2024-09-16 14:56:05 ERROR: Traceback (most recent call last):
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
2024-09-16 14:56:05 yield
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 233, in handle_request
2024-09-16 14:56:05 resp = self._pool.handle_request(req)
2024-09-16 14:56:05 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
2024-09-16 14:56:05 raise exc from None
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
2024-09-16 14:56:05 response = connection.handle_request(
2024-09-16 14:56:05 ^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
2024-09-16 14:56:05 raise exc
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
2024-09-16 14:56:05 stream = self._connect(request)
2024-09-16 14:56:05 ^^^^^^^^^^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 122, in _connect
2024-09-16 14:56:05 stream = self._network_backend.connect_tcp(**kwargs)
2024-09-16 14:56:05 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
2024-09-16 14:56:05 with map_exceptions(exc_map):
2024-09-16 14:56:05 File "/usr/local/lib/python3.12/contextlib.py", line 155, in __exit__
2024-09-16 14:56:05 self.gen.throw(value)
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
2024-09-16 14:56:05 raise to_exc(exc) from exc
2024-09-16 14:56:05 httpcore.ConnectError: [Errno 111] Connection refused
2024-09-16 14:56:05
2024-09-16 14:56:05 The above exception was the direct cause of the following exception:
2024-09-16 14:56:05
2024-09-16 14:56:05 Traceback (most recent call last):
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/starlette/routing.py", line 734, in lifespan
2024-09-16 14:56:05 async with self.lifespan_context(app) as maybe_state:
2024-09-16 14:56:05 File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
2024-09-16 14:56:05 return await anext(self.gen)
2024-09-16 14:56:05 ^^^^^^^^^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/src/__main__.py", line 28, in lifespan
2024-09-16 14:56:05 container.init_globals()
2024-09-16 14:56:05 File "/src/globals.py", line 53, in init_globals
2024-09-16 14:56:05 llm_provider, embedder_provider, document_store_provider, engine = init_providers(
2024-09-16 14:56:05 ^^^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/src/utils.py", line 68, in init_providers
2024-09-16 14:56:05 embedder_provider = loader.get_provider(
2024-09-16 14:56:05 ^^^^^^^^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/src/providers/embedder/ollama.py", line 170, in __init__
2024-09-16 14:56:05 pull_ollama_model(self._url, self._embedding_model)
2024-09-16 14:56:05 File "/src/providers/loader.py", line 107, in pull_ollama_model
2024-09-16 14:56:05 models = client.list()["models"]
2024-09-16 14:56:05 ^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/ollama/_client.py", line 333, in list
2024-09-16 14:56:05 return self._request('GET', '/api/tags').json()
2024-09-16 14:56:05 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/ollama/_client.py", line 69, in _request
2024-09-16 14:56:05 response = self._client.request(method, url, **kwargs)
2024-09-16 14:56:05 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 827, in request
2024-09-16 14:56:05 return self.send(request, auth=auth, follow_redirects=follow_redirects)
2024-09-16 14:56:05 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 914, in send
2024-09-16 14:56:05 response = self._send_handling_auth(
2024-09-16 14:56:05 ^^^^^^^^^^^^^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 942, in _send_handling_auth
2024-09-16 14:56:05 response = self._send_handling_redirects(
2024-09-16 14:56:05 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
2024-09-16 14:56:05 response = self._send_single_request(request)
2024-09-16 14:56:05 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1015, in _send_single_request
2024-09-16 14:56:05 response = transport.handle_request(request)
2024-09-16 14:56:05 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 232, in handle_request
2024-09-16 14:56:05 with map_httpcore_exceptions():
2024-09-16 14:56:05 File "/usr/local/lib/python3.12/contextlib.py", line 155, in __exit__
2024-09-16 14:56:05 self.gen.throw(value)
2024-09-16 14:56:05 File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
2024-09-16 14:56:05 raise mapped_exc(message) from exc
2024-09-16 14:56:05 httpx.ConnectError: [Errno 111] Connection refused
2024-09-16 14:56:05
2024-09-16 14:56:05 ERROR: Application startup failed. Exiting.
</details>
At first, I thought it was an issue with Docker communicating with Ollama, so I entered the wren-ai-service-1 container in my terminal (docker exec -it wrenai-wren-ai-service-1 /bin/bash). I wanted to see if I could verify the Docker connection in there, so I did this and succeeded as you can see here:
<details>
root@1773b32d1441:/# curl http://host.docker.internal:11434/v1/models
{"object":"list","data":[{"id":"llama3:8b","object":"model","created":1726592861,"owned_by":"library"},{"id":"llama3:70b","object":"model","created":1724455596,"owned_by":"library"},{"id":"nomic-embed-text:latest","object":"model","created":1724455596,"owned_by":"library"}]}
root@1773b32d1441:/# curl -X POST http://host.docker.internal:11434/api/generate -H "Content-Type: application/json" -d '{
"model": "llama3:8b",
"prompt": "What is the capital of France?",
"temperature": 0.7
}'
{"model":"llama3:8b","created_at":"2024-09-17T17:15:43.734078981Z","response":"The","done":false}
{"model":"llama3:8b","created_at":"2024-09-17T17:15:43.990105229Z","response":" capital","done":false}
{"model":"llama3:8b","created_at":"2024-09-17T17:15:44.25267107Z","response":" of","done":false}
{"model":"llama3:8b","created_at":"2024-09-17T17:15:44.513833801Z","response":" France","done":false}
{"model":"llama3:8b","created_at":"2024-09-17T17:15:44.766417457Z","response":" is","done":false}
{"model":"llama3:8b","created_at":"2024-09-17T17:15:45.037476835Z","response":" Paris","done":false}
{"model":"llama3:8b","created_at":"2024-09-17T17:15:45.306243202Z","response":".","done":false}
{"model":"llama3:8b","created_at":"2024-09-17T17:15:45.557952927Z","response":"","done":true,"done_reason":"stop","context":[128006,882,128007,271,3923,374,279,6864,315,9822,30,128009,128006,78191,128007,271,791,6864,315,9822,374,12366,13],"total_duration":8320738087,"load_duration":4567994305,"prompt_eval_count":17,"prompt_eval_duration":1882077000,"eval_count":8,"eval_duration":1823834000}
</details>
Sure enough, the curl request to Ollama worked fine in the Docker container through the Docker host URL. So I confirmed that I could access the Ollama service just fine from both my environments: inside the Docker container and directly on the host machine. However, just to be thorough, I should mention that when I am inside the container in the terminal and send the request to localhost:11434 instead of host.docker.internal:11434, it says "curl: (7) Failed to connect to localhost port 11434 after 4 ms: Couldn't connect to server".
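One note for other readers on Linux: `host.docker.internal` is not defined inside containers by default on plain Docker Engine (it clearly resolved in this setup, but that is not universal). A compose fragment that maps it to the host gateway, assuming Docker Engine 20.10 or newer, is a sketch like:

```yaml
services:
  wren-ai-service:
    extra_hosts:
      # make host.docker.internal resolve to the Docker host on Linux
      - "host.docker.internal:host-gateway"
```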
So next, I thought the problem was with WrenAI, but then this happened:
- **Replit**: This was a random sidequest that ended up being pretty informative in the end. Basically I didn't like the output of the llama3:8b model in my terminal before I knew about the "ollama run llama3:8b" command (LOL). So I tried to use Replit (a cloud-based development environment) to host its responses in a nicer format. These are the steps I took:
<details>
Step 1: Create a Replit Account
Step 2: Create a New Repl, Click "Create" in the top left corner of the dashboard. Choose Python as the programming language. Name the project (e.g., Ollama Chat), then click Create Repl.
Step 3: Set Up the API Call
Install Dependencies: In the Repl’s file explorer, create a requirements.txt file and add:
'requests'
Then I created a main.py file and added this code to interact with Ollama:
```python
import requests
import json

url = "http://localhost:11434/api/generate"
data = {
    "model": "llama3:8b",
    "prompt": "What is the capital of France?",
    "temperature": 0.7
}

response = requests.post(url, headers={"Content-Type": "application/json"}, data=json.dumps(data))
print(response.json())
```
</details>
I ran the code, and I got these errors:
<details>
Traceback (most recent call last):
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/urllib3/connection.py", line 199, in _new_conn
sock = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/urllib3/connectionpool.py", line 789, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/urllib3/connectionpool.py", line 495, in _make_request
conn.request(
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/urllib3/connection.py", line 441, in request
self.endheaders()
File "/nix/store/f98g7xbckgqbkagdvpzc2r6lv3h1p9ki-python3-3.11.9/lib/python3.11/http/client.py", line 1298, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/nix/store/f98g7xbckgqbkagdvpzc2r6lv3h1p9ki-python3-3.11.9/lib/python3.11/http/client.py", line 1058, in _send_output
self.send(msg)
File "/nix/store/f98g7xbckgqbkagdvpzc2r6lv3h1p9ki-python3-3.11.9/lib/python3.11/http/client.py", line 996, in send
self.connect()
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/urllib3/connection.py", line 279, in connect
self.sock = self._new_conn()
^^^^^^^^^^^^^^^^
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/urllib3/connection.py", line 214, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7b2fd58958d0>: Failed to establish a new connection: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/requests/adapters.py", line 667, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/urllib3/connectionpool.py", line 843, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/urllib3/util/retry.py", line 519, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7b2fd58958d0>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/runner/ollama-8b-chat/main.py", line 15, in <module>
response = requests.post(url, headers={"Content-Type": "application/json"}, data=json.dumps(data))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/requests/api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/ollama-8b-chat/.pythonlibs/lib/python3.11/site-packages/requests/adapters.py", line 700, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7b2fd58958d0>: Failed to establish a new connection: [Errno 111] Connection refused'))
</details>
As you can see, it threw the same [Errno 111] Connection refused. This error indicates that **localhost:11434** cannot be reached from Replit, which makes sense because Replit runs in the cloud and doesn’t have access to my local machine’s `localhost`. The same problem occurs in Wren AI because, like Replit, it's not running on my local machine but rather trying to connect to my local instance over HTTP. So this is what has convinced me that there's something wrong with my ollama service, as it does not seem to allow any external services to interact with it despite how it is functioning just fine on my machine locally. Note: it also gave the same errors when I changed the python script to http://host.docker.internal:11434/api/generate instead of localhost:11434.
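Since the mismatch boils down to a hard-coded endpoint, here is a small sketch of the same request with the base URL read from an environment variable instead, so one script can point at localhost, host.docker.internal, or a tunnel URL. The variable name `OLLAMA_BASE_URL` is my own convention for this sketch, not something Ollama itself reads.

```python
import json
import os
import urllib.request
from typing import Optional


def generate_url(base_url: Optional[str] = None) -> str:
    """Build the /api/generate URL from an explicit or environment-supplied base."""
    base = base_url or os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
    return base.rstrip("/") + "/api/generate"


def ask(prompt: str, model: str = "llama3:8b", base_url: Optional[str] = None) -> dict:
    """Send a non-streaming generate request and return the decoded JSON reply."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    req = urllib.request.Request(
        generate_url(base_url),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Run with e.g. `OLLAMA_BASE_URL=http://host.docker.internal:11434 python main.py` inside a container, leaving the code unchanged.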
### 3. **Suspicions and Additional Troubleshooting Attempts**
1. One of the first things I suspected was that Ollama’s default CORS (Cross-Origin Resource Sharing) policy might be preventing access from external sources like Replit and Wren AI, as by default, Ollama’s CORS policy is restricted to **localhost**. To resolve this, I tried editing the **Ollama systemd service** configuration like I described earlier. But again, despite making these changes, both Wren AI and Replit continued to throw the same **connection refused** errors.
2. I noticed that a lot of the error files are coming from a Python "httpx" directory, and I have read in Ollama's documentation FAQ (https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-linux) that one should "Avoid setting HTTP_PROXY. Ollama does not use HTTP for model pulls, only HTTPS. Setting HTTP_PROXY may interrupt client connections to the server." No idea if this is a rabbit hole of relevance though.
On the subject of proxies, I know that there's an option to expose my Ollama, like using a tool such as **ngrok** to expose my locally running Ollama instance to the internet so that Wren AI and Replit can access it via an externally accessible URL, or deploying Ollama on a cloud server or a VPS. I do not want to expose my Ollama server, and I feel like I shouldn't have to, right? Again, I am a beginner, so if any of you experts think this is my only option I would love to hear it from you.
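For reference, the systemd edit described in point 1 above is documented in the Ollama FAQ as a drop-in override; a minimal sketch follows (0.0.0.0 and * are the permissive values commonly tried while debugging, not a security recommendation):

```ini
# created with: sudo systemctl edit ollama.service
# (written to /etc/systemd/system/ollama.service.d/override.conf)
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
```

After saving, run `sudo systemctl daemon-reload && sudo systemctl restart ollama`, then check with `ss -tlnp | grep 11434` whether the server listens on 0.0.0.0 rather than 127.0.0.1.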
Okay that is all! Any help, suggestions, comments etc would be greatly appreciated because I am feeling very stuck. Thanks!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6843/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7522
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7522/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7522/comments
|
https://api.github.com/repos/ollama/ollama/issues/7522/events
|
https://github.com/ollama/ollama/issues/7522
| 2,637,096,514
|
I_kwDOJ0Z1Ps6dLuZC
| 7,522
|
not able to download models from ollama behind proxy
|
{
"login": "anshika1234",
"id": 6309074,
"node_id": "MDQ6VXNlcjYzMDkwNzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6309074?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anshika1234",
"html_url": "https://github.com/anshika1234",
"followers_url": "https://api.github.com/users/anshika1234/followers",
"following_url": "https://api.github.com/users/anshika1234/following{/other_user}",
"gists_url": "https://api.github.com/users/anshika1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anshika1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anshika1234/subscriptions",
"organizations_url": "https://api.github.com/users/anshika1234/orgs",
"repos_url": "https://api.github.com/users/anshika1234/repos",
"events_url": "https://api.github.com/users/anshika1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/anshika1234/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 19
| 2024-11-06T05:40:47
| 2024-11-06T11:09:14
| 2024-11-06T11:07:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
ollama run qwen:4b
pulling manifest
pulling 46bb65206e0e... 0% ▕ ▏ 3.3 MB/2.3 GB 1.4 KB/s 99h+^****
-------------------------------------------- Log content -------------------
ollama[719]: time=2024-11-06T10:08:16.817+05:30 level=INFO source=download.go:370 msg="8eeb52dfb3bb part 15 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
-----------------------------------------------Settings-----------
The https_proxy setup is done in the Ollama service environment.
Please suggest how to debug further.
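For anyone checking the proxy configuration: the FAQ-documented way to hand a proxy to the service on Linux is a systemd drop-in. A sketch, with the proxy address as a placeholder:

```ini
# sudo systemctl edit ollama.service
[Service]
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
```

Per the FAQ, set HTTPS_PROXY (model pulls use HTTPS only) and avoid HTTP_PROXY, and make sure the proxy's CA certificate is trusted by the system; then run `sudo systemctl daemon-reload && sudo systemctl restart ollama`.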
### OS
Linux
### GPU
_No response_
### CPU
Intel
### Ollama version
0.3.14
|
{
"login": "anshika1234",
"id": 6309074,
"node_id": "MDQ6VXNlcjYzMDkwNzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6309074?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anshika1234",
"html_url": "https://github.com/anshika1234",
"followers_url": "https://api.github.com/users/anshika1234/followers",
"following_url": "https://api.github.com/users/anshika1234/following{/other_user}",
"gists_url": "https://api.github.com/users/anshika1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anshika1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anshika1234/subscriptions",
"organizations_url": "https://api.github.com/users/anshika1234/orgs",
"repos_url": "https://api.github.com/users/anshika1234/repos",
"events_url": "https://api.github.com/users/anshika1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/anshika1234/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7522/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8548
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8548/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8548/comments
|
https://api.github.com/repos/ollama/ollama/issues/8548/events
|
https://github.com/ollama/ollama/issues/8548
| 2,806,581,762
|
I_kwDOJ0Z1Ps6nSQoC
| 8,548
|
ollama create --quantize q4_K_M not working
|
{
"login": "hgKang02",
"id": 68604896,
"node_id": "MDQ6VXNlcjY4NjA0ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/68604896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hgKang02",
"html_url": "https://github.com/hgKang02",
"followers_url": "https://api.github.com/users/hgKang02/followers",
"following_url": "https://api.github.com/users/hgKang02/following{/other_user}",
"gists_url": "https://api.github.com/users/hgKang02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hgKang02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hgKang02/subscriptions",
"organizations_url": "https://api.github.com/users/hgKang02/orgs",
"repos_url": "https://api.github.com/users/hgKang02/repos",
"events_url": "https://api.github.com/users/hgKang02/events{/privacy}",
"received_events_url": "https://api.github.com/users/hgKang02/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2025-01-23T10:55:34
| 2025-01-24T17:51:25
| 2025-01-24T17:51:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Dear ollama authors,
I hope this issue finds you well. First of all, thank you for such a wonderful project. It helped me to play around with different agentic AI models and tasks.
However, the quantize command is not working for me when I try to quantize the llava-next model. I have created a Modelfile in the same folder where the safetensors are placed. So for instance, if the model's safetensors are placed at /user/agent-models/llava-next, inside this folder I have created the Modelfile with the content
`FROM .`
Now when I run the `ollama create --quantize q4_K_M mymodel` I get
**gathering model components
Error: no Modelfile or safetensors files found**
Would this be because Ollama does not support the LLaVA architecture for the quantize function? Or is there anything that I am doing wrong?
Thank you and it will be much appreciated if I can hear back about this problem.
Sincerely,
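For other readers hitting "no Modelfile or safetensors files found": a sketch of an invocation that names the Modelfile explicitly with -f and uses an absolute path in FROM (paths taken from this report; whether this clears the error for LLaVA specifically is untested):

```shell
# Modelfile contents (hypothetical fix: absolute path instead of "FROM ."):
#   FROM /user/agent-models/llava-next
cd /user/agent-models/llava-next
ollama create --quantize q4_K_M mymodel -f ./Modelfile
```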
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.5.7
|
{
"login": "hgKang02",
"id": 68604896,
"node_id": "MDQ6VXNlcjY4NjA0ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/68604896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hgKang02",
"html_url": "https://github.com/hgKang02",
"followers_url": "https://api.github.com/users/hgKang02/followers",
"following_url": "https://api.github.com/users/hgKang02/following{/other_user}",
"gists_url": "https://api.github.com/users/hgKang02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hgKang02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hgKang02/subscriptions",
"organizations_url": "https://api.github.com/users/hgKang02/orgs",
"repos_url": "https://api.github.com/users/hgKang02/repos",
"events_url": "https://api.github.com/users/hgKang02/events{/privacy}",
"received_events_url": "https://api.github.com/users/hgKang02/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8548/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1757
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1757/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1757/comments
|
https://api.github.com/repos/ollama/ollama/issues/1757/events
|
https://github.com/ollama/ollama/pull/1757
| 2,061,817,312
|
PR_kwDOJ0Z1Ps5jBYGx
| 1,757
|
Update maid to use the right repo
|
{
"login": "danemadsen",
"id": 11537699,
"node_id": "MDQ6VXNlcjExNTM3Njk5",
"avatar_url": "https://avatars.githubusercontent.com/u/11537699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danemadsen",
"html_url": "https://github.com/danemadsen",
"followers_url": "https://api.github.com/users/danemadsen/followers",
"following_url": "https://api.github.com/users/danemadsen/following{/other_user}",
"gists_url": "https://api.github.com/users/danemadsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danemadsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danemadsen/subscriptions",
"organizations_url": "https://api.github.com/users/danemadsen/orgs",
"repos_url": "https://api.github.com/users/danemadsen/repos",
"events_url": "https://api.github.com/users/danemadsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/danemadsen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-01T23:00:44
| 2024-01-02T14:47:08
| 2024-01-02T14:47:08
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1757",
"html_url": "https://github.com/ollama/ollama/pull/1757",
"diff_url": "https://github.com/ollama/ollama/pull/1757.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1757.patch",
"merged_at": "2024-01-02T14:47:08"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1757/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8676
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8676/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8676/comments
|
https://api.github.com/repos/ollama/ollama/issues/8676/events
|
https://github.com/ollama/ollama/pull/8676
| 2,819,521,168
|
PR_kwDOJ0Z1Ps6Jbnv2
| 8,676
|
docs: update api.md with streaming with tools is enabled
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2025-01-29T23:01:05
| 2025-01-30T13:08:49
| 2025-01-29T23:14:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8676",
"html_url": "https://github.com/ollama/ollama/pull/8676",
"diff_url": "https://github.com/ollama/ollama/pull/8676.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8676.patch",
"merged_at": "2025-01-29T23:14:30"
}
|
Shoutout to @sixlive for finding this!
The docs were outdated and didn't mention that we can now stream tool calls.
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8676/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4579
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4579/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4579/comments
|
https://api.github.com/repos/ollama/ollama/issues/4579/events
|
https://github.com/ollama/ollama/issues/4579
| 2,311,187,240
|
I_kwDOJ0Z1Ps6Jweso
| 4,579
|
Redownloading model on run command after runner crash
|
{
"login": "TipuatGit",
"id": 87166372,
"node_id": "MDQ6VXNlcjg3MTY2Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/87166372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TipuatGit",
"html_url": "https://github.com/TipuatGit",
"followers_url": "https://api.github.com/users/TipuatGit/followers",
"following_url": "https://api.github.com/users/TipuatGit/following{/other_user}",
"gists_url": "https://api.github.com/users/TipuatGit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TipuatGit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TipuatGit/subscriptions",
"organizations_url": "https://api.github.com/users/TipuatGit/orgs",
"repos_url": "https://api.github.com/users/TipuatGit/repos",
"events_url": "https://api.github.com/users/TipuatGit/events{/privacy}",
"received_events_url": "https://api.github.com/users/TipuatGit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-05-22T18:14:48
| 2024-11-05T23:14:33
| 2024-11-05T23:14:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I downloaded llama3 with the `ollama run llama3` command, and after it downloaded, first it failed with the error:
`Error: llama runner process has terminated: exit status 0xc0000005`
Then, to see if the error would reproduce, I ran `ollama run llama3` again, but it started to download the model all over again. I don't understand why it's doing that, and it happens repeatedly: every time I run the command it just redownloads.
Note: I changed model directory by creating environment variable `OLLAMA_MODELS` as per the instructions in F.A.Qs. Also, model is in both the C drive and my other drive that I chose.
Foremost, I would like it to stop redownloading and use what is on my system already. That is top priority.
Please help guys.
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
0.1.38
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4579/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4579/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4713
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4713/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4713/comments
|
https://api.github.com/repos/ollama/ollama/issues/4713/events
|
https://github.com/ollama/ollama/issues/4713
| 2,324,429,228
|
I_kwDOJ0Z1Ps6Ki_ms
| 4,713
|
Codestral doesn't output correct response
|
{
"login": "jasonhotsauce",
"id": 3296551,
"node_id": "MDQ6VXNlcjMyOTY1NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3296551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jasonhotsauce",
"html_url": "https://github.com/jasonhotsauce",
"followers_url": "https://api.github.com/users/jasonhotsauce/followers",
"following_url": "https://api.github.com/users/jasonhotsauce/following{/other_user}",
"gists_url": "https://api.github.com/users/jasonhotsauce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jasonhotsauce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jasonhotsauce/subscriptions",
"organizations_url": "https://api.github.com/users/jasonhotsauce/orgs",
"repos_url": "https://api.github.com/users/jasonhotsauce/repos",
"events_url": "https://api.github.com/users/jasonhotsauce/events{/privacy}",
"received_events_url": "https://api.github.com/users/jasonhotsauce/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 11
| 2024-05-30T00:19:44
| 2024-11-17T22:24:52
| 2024-11-17T22:24:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Example:
```
>>> write a python function to calculate fibonacci sequence
[control_8][control_11][control_19][control_35][control_18][control_11]▅[control_20][control_20][control_20][TOOL_RESULTS][control_27][control_32][control_20][control_8][control_11][control_19][control_35][control_18][control_11]▅[control_20][control_20][control_20][TOOL_RESULTS][control_27][control_32][control_20][control_32][control_19][control_21][control_26][control_19][/TOOL_RESULTS][control_11][control_30][control_16][control_14]
>>> /bye
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.39
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4713/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4713/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/543
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/543/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/543/comments
|
https://api.github.com/repos/ollama/ollama/issues/543/events
|
https://github.com/ollama/ollama/issues/543
| 1,899,565,254
|
I_kwDOJ0Z1Ps5xORDG
| 543
|
Error when loading model
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-09-16T20:55:32
| 2023-09-26T22:29:23
| 2023-09-26T22:29:22
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
ggml_metal_init: allocating
ggml_metal_init: loading '(null)'
ggml_metal_init: error: Error Domain=NSCocoaErrorDomain Code=258 "The file name is invalid."
llama_new_context_with_model: ggml_metal_init() failed
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/543/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6900
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6900/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6900/comments
|
https://api.github.com/repos/ollama/ollama/issues/6900/events
|
https://github.com/ollama/ollama/pull/6900
| 2,539,909,697
|
PR_kwDOJ0Z1Ps58OjqX
| 6,900
|
CI: win arm artifact dist dir
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-09-21T02:15:54
| 2024-09-21T02:16:23
| 2024-09-21T02:16:19
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6900",
"html_url": "https://github.com/ollama/ollama/pull/6900",
"diff_url": "https://github.com/ollama/ollama/pull/6900.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6900.patch",
"merged_at": "2024-09-21T02:16:18"
}
|
The upload artifact is missing the dist prefix since all payloads are in the same directory, so restore the prefix on download.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6900/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3140
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3140/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3140/comments
|
https://api.github.com/repos/ollama/ollama/issues/3140/events
|
https://github.com/ollama/ollama/issues/3140
| 2,186,539,679
|
I_kwDOJ0Z1Ps6CU_Kf
| 3,140
|
[Win11] Help needed
|
{
"login": "taurusduan",
"id": 30854760,
"node_id": "MDQ6VXNlcjMwODU0NzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/30854760?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taurusduan",
"html_url": "https://github.com/taurusduan",
"followers_url": "https://api.github.com/users/taurusduan/followers",
"following_url": "https://api.github.com/users/taurusduan/following{/other_user}",
"gists_url": "https://api.github.com/users/taurusduan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taurusduan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taurusduan/subscriptions",
"organizations_url": "https://api.github.com/users/taurusduan/orgs",
"repos_url": "https://api.github.com/users/taurusduan/repos",
"events_url": "https://api.github.com/users/taurusduan/events{/privacy}",
"received_events_url": "https://api.github.com/users/taurusduan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-03-14T14:34:59
| 2024-04-17T22:56:28
| 2024-04-17T22:56:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm a new user. Ollama's API runs automatically in the background, serving at http://localhost:11434, and tools and applications can connect to it without any extra setup.
My server's LAN address is 192.168.1.2, but I don't know why http://192.168.1.2:11434 is unreachable.
I hope someone can help me. Thank you.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3140/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7442
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7442/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7442/comments
|
https://api.github.com/repos/ollama/ollama/issues/7442/events
|
https://github.com/ollama/ollama/pull/7442
| 2,626,076,227
|
PR_kwDOJ0Z1Ps6AfBFV
| 7,442
|
feat(auth): Enhance authentication package with improved error handling and security
|
{
"login": "Rekt-Developer",
"id": 186061827,
"node_id": "U_kgDOCxcUAw",
"avatar_url": "https://avatars.githubusercontent.com/u/186061827?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rekt-Developer",
"html_url": "https://github.com/Rekt-Developer",
"followers_url": "https://api.github.com/users/Rekt-Developer/followers",
"following_url": "https://api.github.com/users/Rekt-Developer/following{/other_user}",
"gists_url": "https://api.github.com/users/Rekt-Developer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rekt-Developer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rekt-Developer/subscriptions",
"organizations_url": "https://api.github.com/users/Rekt-Developer/orgs",
"repos_url": "https://api.github.com/users/Rekt-Developer/repos",
"events_url": "https://api.github.com/users/Rekt-Developer/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rekt-Developer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2024-10-31T06:37:27
| 2024-10-31T06:38:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7442",
"html_url": "https://github.com/ollama/ollama/pull/7442",
"diff_url": "https://github.com/ollama/ollama/pull/7442.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7442.patch",
"merged_at": null
}
|
This PR enhances the authentication package with improved error handling, security, and maintainability.
Key Changes:
• Introduces custom error types for better error handling
• Adds comprehensive documentation and logging
• Improves input validation and security checks
• Implements proper context handling
• Enhances code structure and maintainability
Security Considerations:
• Better file permission handling
• Improved key validation
• Enhanced error messages that don't leak sensitive info
Breaking Changes: None
Testing:
All existing functionality remains unchanged while adding more robust error handling.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7442/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8380
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8380/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8380/comments
|
https://api.github.com/repos/ollama/ollama/issues/8380/events
|
https://github.com/ollama/ollama/pull/8380
| 2,781,383,457
|
PR_kwDOJ0Z1Ps6HZc9D
| 8,380
|
make the modelfile path relative for `ollama create`
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2025-01-10T23:11:50
| 2025-01-11T00:14:09
| 2025-01-11T00:14:08
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8380",
"html_url": "https://github.com/ollama/ollama/pull/8380",
"diff_url": "https://github.com/ollama/ollama/pull/8380.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8380.patch",
"merged_at": "2025-01-11T00:14:08"
}
|
Fixes #8353
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8380/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2882
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2882/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2882/comments
|
https://api.github.com/repos/ollama/ollama/issues/2882/events
|
https://github.com/ollama/ollama/issues/2882
| 2,164,904,751
|
I_kwDOJ0Z1Ps6BCdMv
| 2,882
|
Mixtral on Ollama, Nvidia RTX 3090 24G vs Nvidia A5000 24G : A Comparative Experience
|
{
"login": "nejib1",
"id": 10485460,
"node_id": "MDQ6VXNlcjEwNDg1NDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/10485460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nejib1",
"html_url": "https://github.com/nejib1",
"followers_url": "https://api.github.com/users/nejib1/followers",
"following_url": "https://api.github.com/users/nejib1/following{/other_user}",
"gists_url": "https://api.github.com/users/nejib1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nejib1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nejib1/subscriptions",
"organizations_url": "https://api.github.com/users/nejib1/orgs",
"repos_url": "https://api.github.com/users/nejib1/repos",
"events_url": "https://api.github.com/users/nejib1/events{/privacy}",
"received_events_url": "https://api.github.com/users/nejib1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-03-02T17:44:17
| 2024-05-16T23:21:22
| 2024-05-16T23:21:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
This is not an issue and can be closed immediately; I just wanted to share my experience running Mixtral (26G) on Ollama, comparing
the Nvidia RTX 3090 and the Nvidia RTX A5000 on the same hardware, a SuperMicro 1028GR-TR server. Here's what I found:
**Speed:** I didn't notice any difference in speed, both GPUs perform similarly in this regard.
**Temperature:** The RTX 3090 runs significantly hotter compared to the A5000. Throughout my tests, the A5000 remained impressively cool.
**Fan Speed:** The RTX 3090's fans run at a minimum of 65%, whereas the A5000's fans operate at 30%, which is another point in favor of the A5000.
**Power Consumption:** The A5000 is more energy-efficient, consuming up to 230 watts at peak, whereas the RTX 3090 can consume up to 350 watts.
**Conclusion:** For AI purposes (Ollama in any case), **The A5000 is clearly superior** in all the aspects I tested.
It offers comparable speed with significantly better temperature management, lower fan speeds, and reduced power consumption.
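For reference, a sketch of how the temperature, fan, and power figures above can be sampled (assumes the proprietary NVIDIA driver; falls back to a message when `nvidia-smi` is unavailable):

```shell
# Sample the metrics compared above: temperature, fan speed, power draw.
# Guarded so it degrades gracefully on machines without the NVIDIA driver.
if command -v nvidia-smi >/dev/null 2>&1; then
    METRICS="$(nvidia-smi --query-gpu=name,temperature.gpu,fan.speed,power.draw --format=csv)"
else
    METRICS="nvidia-smi not available on this machine"
fi
echo "$METRICS"
```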
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2882/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2882/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1460
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1460/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1460/comments
|
https://api.github.com/repos/ollama/ollama/issues/1460/events
|
https://github.com/ollama/ollama/issues/1460
| 2,034,890,388
|
I_kwDOJ0Z1Ps55SfaU
| 1,460
|
Getting the GPU running in WSL2?
|
{
"login": "gerroon",
"id": 8519469,
"node_id": "MDQ6VXNlcjg1MTk0Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8519469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gerroon",
"html_url": "https://github.com/gerroon",
"followers_url": "https://api.github.com/users/gerroon/followers",
"following_url": "https://api.github.com/users/gerroon/following{/other_user}",
"gists_url": "https://api.github.com/users/gerroon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gerroon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerroon/subscriptions",
"organizations_url": "https://api.github.com/users/gerroon/orgs",
"repos_url": "https://api.github.com/users/gerroon/repos",
"events_url": "https://api.github.com/users/gerroon/events{/privacy}",
"received_events_url": "https://api.github.com/users/gerroon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 16
| 2023-12-11T05:59:00
| 2024-10-19T01:25:02
| 2023-12-12T17:01:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi
I am running it under WSL2. It is telling me that it can't find the GPU. Is anyone running it under WSL with a GPU? I have a 3080.
```
>>> The Ollama API is now available at 0.0.0.0:11434.
>>> Install complete. Run "ollama" from the command line.
WARNING: No NVIDIA GPU detected. Ollama will run in CPU-only mode.
>>> The Ollama API is now available at 0.0.0.0:11434.
>>> Install complete. Run "ollama" from the command line.
```
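A quick sanity check from inside the WSL2 distribution is whether `nvidia-smi` is visible at all (a sketch; it assumes the Windows host has an NVIDIA driver with WSL support installed):

```shell
# Check whether the NVIDIA driver is exposed inside WSL2.
if command -v nvidia-smi >/dev/null 2>&1; then
    GPU_STATUS="$(nvidia-smi --query-gpu=name --format=csv,noheader)"
else
    GPU_STATUS="no NVIDIA driver visible inside WSL2"
fi
echo "$GPU_STATUS"
```

If `nvidia-smi` is missing here, the CUDA libraries are not exposed to WSL2 and Ollama will fall back to CPU-only mode.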
|
{
"login": "gerroon",
"id": 8519469,
"node_id": "MDQ6VXNlcjg1MTk0Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8519469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gerroon",
"html_url": "https://github.com/gerroon",
"followers_url": "https://api.github.com/users/gerroon/followers",
"following_url": "https://api.github.com/users/gerroon/following{/other_user}",
"gists_url": "https://api.github.com/users/gerroon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gerroon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerroon/subscriptions",
"organizations_url": "https://api.github.com/users/gerroon/orgs",
"repos_url": "https://api.github.com/users/gerroon/repos",
"events_url": "https://api.github.com/users/gerroon/events{/privacy}",
"received_events_url": "https://api.github.com/users/gerroon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1460/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/366
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/366/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/366/comments
|
https://api.github.com/repos/ollama/ollama/issues/366/events
|
https://github.com/ollama/ollama/pull/366
| 1,854,154,041
|
PR_kwDOJ0Z1Ps5YHOO8
| 366
|
adding link to models directly available on ollama
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-17T02:50:04
| 2023-08-17T02:53:27
| 2023-08-17T02:53:27
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/366",
"html_url": "https://github.com/ollama/ollama/pull/366",
"diff_url": "https://github.com/ollama/ollama/pull/366.diff",
"patch_url": "https://github.com/ollama/ollama/pull/366.patch",
"merged_at": "2023-08-17T02:53:27"
}
|
- adding link to models directly available on ollama
- ability to push your own models to the library will come in the future
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/366/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/418
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/418/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/418/comments
|
https://api.github.com/repos/ollama/ollama/issues/418/events
|
https://github.com/ollama/ollama/issues/418
| 1,868,079,887
|
I_kwDOJ0Z1Ps5vWKMP
| 418
|
Use with Continue.dev plugin in VSCodium seems broken (Linux)
|
{
"login": "matbgn",
"id": 13169819,
"node_id": "MDQ6VXNlcjEzMTY5ODE5",
"avatar_url": "https://avatars.githubusercontent.com/u/13169819?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matbgn",
"html_url": "https://github.com/matbgn",
"followers_url": "https://api.github.com/users/matbgn/followers",
"following_url": "https://api.github.com/users/matbgn/following{/other_user}",
"gists_url": "https://api.github.com/users/matbgn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matbgn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matbgn/subscriptions",
"organizations_url": "https://api.github.com/users/matbgn/orgs",
"repos_url": "https://api.github.com/users/matbgn/repos",
"events_url": "https://api.github.com/users/matbgn/events{/privacy}",
"received_events_url": "https://api.github.com/users/matbgn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2023-08-26T12:24:39
| 2023-08-30T21:14:23
| 2023-08-30T21:14:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I cannot get the ollama server communicating with the Continue plugin in VSCodium.
Continue still uses the ChatGPT API instead of the local one. Here is some context:
~/.continue/config.py
```
"""
This is the Continue configuration file.
If you aren't getting strong typing on these imports,
be sure to select the Python interpreter in ~/.continue/server/env.
"""
import subprocess
from continuedev.src.continuedev.core.main import Step
from continuedev.src.continuedev.core.sdk import ContinueSDK
from continuedev.src.continuedev.core.config import CustomCommand, SlashCommand, ContinueConfig
from continuedev.src.continuedev.plugins.context_providers.github import GitHubIssuesContextProvider
from continuedev.src.continuedev.plugins.context_providers.google import GoogleContextProvider
from continuedev.src.continuedev.libs.llm.ollama import Ollama
# Models must also be imported for the models= block below
# (module path may differ between continuedev versions)
from continuedev.src.continuedev.core.models import Models
class CommitMessageStep(Step):
"""
This is a Step, the building block of Continue.
It can be used below as a slash command, so that
run will be called when you type '/commit'.
"""
async def run(self, sdk: ContinueSDK):
# Get the root directory of the workspace
dir = sdk.ide.workspace_directory
# Run git diff in that directory
diff = subprocess.check_output(
["git", "diff"], cwd=dir).decode("utf-8")
# Ask gpt-3.5-16k to write a commit message,
# and set it as the description of this step
self.description = await sdk.models.gpt3516k.complete(
f"{diff}\n\nWrite a short, specific (less than 50 chars) commit message about the above changes:")
config = ContinueConfig(
# If set to False, we will not collect any usage data
# See here to learn what anonymous data we collect: https://continue.dev/docs/telemetry
allow_anonymous_telemetry=False,
# GPT-4 is recommended for best results
# See options here: https://continue.dev/docs/customization#change-the-default-llm
models=Models(
default=Ollama(model="codellama")
),
# Set a system message with information that the LLM should always keep in mind
# E.g. "Please give concise answers. Always respond in Spanish."
system_message=None,
# Set temperature to any value between 0 and 1. Higher values will make the LLM
# more creative, while lower values will make it more predictable.
temperature=0.5,
# Custom commands let you map a prompt to a shortened slash command
# They are like slash commands, but more easily defined - write just a prompt instead of a Step class
# Their output will always be in chat form
custom_commands=[CustomCommand(
name="test",
description="This is an example custom command. Use /config to edit it and create more",
prompt="Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
)],
# Slash commands let you run a Step from a slash command
slash_commands=[
# SlashCommand(
# name="commit",
# description="This is an example slash command. Use /config to edit it and create more",
# step=CommitMessageStep,
# )
],
# Context providers let you quickly select context by typing '@'
# Uncomment the following to
# - quickly reference GitHub issues
# - show Google search results to the LLM
context_providers=[
# GitHubIssuesContextProvider(
# repo_name="<your github username or organization>/<your repo name>",
# auth_token="<your github auth token>"
# ),
# GoogleContextProvider(
# serper_api_key="<your serper.dev api key>"
# )
]
)
```
Ollama starts correctly and just waits indefinitely for instructions:
```
./ollama serve
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] GET / --> github.com/jmorganca/ollama/server.Serve.func1 (4 handlers)
[GIN-debug] HEAD / --> github.com/jmorganca/ollama/server.Serve.func2 (4 handlers)
[GIN-debug] POST /api/pull --> github.com/jmorganca/ollama/server.PullModelHandler (4 handlers)
[GIN-debug] POST /api/generate --> github.com/jmorganca/ollama/server.GenerateHandler (4 handlers)
[GIN-debug] POST /api/embeddings --> github.com/jmorganca/ollama/server.EmbeddingHandler (4 handlers)
[GIN-debug] POST /api/create --> github.com/jmorganca/ollama/server.CreateModelHandler (4 handlers)
[GIN-debug] POST /api/push --> github.com/jmorganca/ollama/server.PushModelHandler (4 handlers)
[GIN-debug] POST /api/copy --> github.com/jmorganca/ollama/server.CopyModelHandler (4 handlers)
[GIN-debug] GET /api/tags --> github.com/jmorganca/ollama/server.ListModelsHandler (4 handlers)
[GIN-debug] DELETE /api/delete --> github.com/jmorganca/ollama/server.DeleteModelHandler (4 handlers)
2023/08/26 14:17:00 routes.go:452: Listening on 127.0.0.1:11434
```
Continue's responses are clearly coming from OpenAI:

|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/418/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4740
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4740/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4740/comments
|
https://api.github.com/repos/ollama/ollama/issues/4740/events
|
https://github.com/ollama/ollama/pull/4740
| 2,326,939,542
|
PR_kwDOJ0Z1Ps5xFEDP
| 4,740
|
speed up tests by only building static lib
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-31T04:28:53
| 2024-05-31T04:43:16
| 2024-05-31T04:43:16
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4740",
"html_url": "https://github.com/ollama/ollama/pull/4740",
"diff_url": "https://github.com/ollama/ollama/pull/4740.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4740.patch",
"merged_at": "2024-05-31T04:43:16"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4740/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3017
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3017/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3017/comments
|
https://api.github.com/repos/ollama/ollama/issues/3017/events
|
https://github.com/ollama/ollama/issues/3017
| 2,177,047,045
|
I_kwDOJ0Z1Ps6BwxoF
| 3,017
|
how to set another dir of .ollama
|
{
"login": "gavinwang668",
"id": 129959476,
"node_id": "U_kgDOB78GNA",
"avatar_url": "https://avatars.githubusercontent.com/u/129959476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gavinwang668",
"html_url": "https://github.com/gavinwang668",
"followers_url": "https://api.github.com/users/gavinwang668/followers",
"following_url": "https://api.github.com/users/gavinwang668/following{/other_user}",
"gists_url": "https://api.github.com/users/gavinwang668/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gavinwang668/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gavinwang668/subscriptions",
"organizations_url": "https://api.github.com/users/gavinwang668/orgs",
"repos_url": "https://api.github.com/users/gavinwang668/repos",
"events_url": "https://api.github.com/users/gavinwang668/events{/privacy}",
"received_events_url": "https://api.github.com/users/gavinwang668/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-03-09T04:32:26
| 2024-03-13T11:03:16
| 2024-03-11T22:13:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The default `.ollama` directory is in the user's home directory. I want to use a different directory. How can I do that?
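For anyone landing here: model storage can be redirected with the documented `OLLAMA_MODELS` environment variable (the path below is just an example):

```shell
# Redirect Ollama's model storage to a custom directory.
export OLLAMA_MODELS="$HOME/ollama-models"
mkdir -p "$OLLAMA_MODELS"
echo "$OLLAMA_MODELS"
```

On Linux installs managed by systemd, the variable must be set on the service itself (e.g. via `systemctl edit ollama.service` with an `Environment=` line) rather than in a shell profile, since the server does not inherit your login shell's environment.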
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3017/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6616
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6616/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6616/comments
|
https://api.github.com/repos/ollama/ollama/issues/6616/events
|
https://github.com/ollama/ollama/issues/6616
| 2,503,841,463
|
I_kwDOJ0Z1Ps6VPZa3
| 6,616
|
A100 shared GPU - Server not responding (always after some time where it works)
|
{
"login": "Ida-Ida",
"id": 59512406,
"node_id": "MDQ6VXNlcjU5NTEyNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/59512406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ida-Ida",
"html_url": "https://github.com/Ida-Ida",
"followers_url": "https://api.github.com/users/Ida-Ida/followers",
"following_url": "https://api.github.com/users/Ida-Ida/following{/other_user}",
"gists_url": "https://api.github.com/users/Ida-Ida/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ida-Ida/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ida-Ida/subscriptions",
"organizations_url": "https://api.github.com/users/Ida-Ida/orgs",
"repos_url": "https://api.github.com/users/Ida-Ida/repos",
"events_url": "https://api.github.com/users/Ida-Ida/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ida-Ida/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 16
| 2024-09-03T21:41:17
| 2024-12-21T04:05:13
| 2024-09-13T08:49:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
when running Ollama, it hangs after a few calls to `generate`.
It shows no error; it just hangs for hours until it is killed manually.
Stopping and then restarting Ollama does not resolve the issue; only after a system restart does it work again for a short time (about 20 `generate` calls).
Reinstalling Ollama did not help either.
Tested with llava:7b model.
My setup:
- ubuntu 24.04.1, as VM on a server with A100 GPU
- 32 GB RAM
(Also important to note: this issue does not appear on my desktop Ubuntu PC, only on the VM.)

A section from the journalctl log:
```
Sep 03 20:59:07 ada-gym ollama[1623]: llm_load_tensors: offloading 32 repeating layers to GPU
Sep 03 20:59:07 ada-gym ollama[1623]: llm_load_tensors: offloading non-repeating layers to GPU
Sep 03 20:59:07 ada-gym ollama[1623]: llm_load_tensors: offloaded 33/33 layers to GPU
Sep 03 20:59:07 ada-gym ollama[1623]: llm_load_tensors: CPU buffer size = 70.31 MiB
Sep 03 20:59:07 ada-gym ollama[1623]: llm_load_tensors: CUDA0 buffer size = 3847.55 MiB
Sep 03 20:59:43 ada-gym ollama[1623]: llama_new_context_with_model: n_ctx = 2048
Sep 03 20:59:43 ada-gym ollama[1623]: llama_new_context_with_model: n_batch = 512
Sep 03 20:59:43 ada-gym ollama[1623]: llama_new_context_with_model: n_ubatch = 512
Sep 03 20:59:43 ada-gym ollama[1623]: llama_new_context_with_model: flash_attn = 0
Sep 03 20:59:43 ada-gym ollama[1623]: llama_new_context_with_model: freq_base = 1000000.0
Sep 03 20:59:43 ada-gym ollama[1623]: llama_new_context_with_model: freq_scale = 1
Sep 03 20:59:43 ada-gym ollama[1623]: llama_kv_cache_init: CUDA0 KV buffer size = 256.00 MiB
Sep 03 20:59:43 ada-gym ollama[1623]: llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
Sep 03 20:59:43 ada-gym ollama[1623]: llama_new_context_with_model: CUDA_Host output buffer size = 0.14 MiB
Sep 03 20:59:43 ada-gym ollama[1623]: llama_new_context_with_model: CUDA0 compute buffer size = 164.00 MiB
Sep 03 20:59:43 ada-gym ollama[1623]: llama_new_context_with_model: CUDA_Host compute buffer size = 12.01 MiB
Sep 03 20:59:43 ada-gym ollama[1623]: llama_new_context_with_model: graph nodes = 1030
Sep 03 20:59:43 ada-gym ollama[1623]: llama_new_context_with_model: graph splits = 2
Sep 03 21:01:19 ada-gym ollama[2737]: INFO [main] model loaded | tid="126876040495104" timestamp=1725397279
Sep 03 21:01:19 ada-gym ollama[1623]: time=2024-09-03T21:01:19.371Z level=INFO source=server.go:630 msg="llama runner started in 178.41 seconds"
Sep 03 21:02:59 ada-gym ollama[1623]: [GIN] 2024/09/03 - 21:02:59 | 500 | 4m39s | 127.0.0.1 | POST "/api/generate"
Sep 03 21:03:46 ada-gym ollama[1623]: [GIN] 2024/09/03 - 21:03:46 | 200 | 684.665122ms | 127.0.0.1 | POST "/api/pull"
Sep 03 21:03:47 ada-gym ollama[1623]: [GIN] 2024/09/03 - 21:03:47 | 200 | 616.762244ms | 127.0.0.1 | POST "/api/pull"
Sep 03 21:03:47 ada-gym ollama[1623]: [GIN] 2024/09/03 - 21:03:47 | 200 | 551.942963ms | 127.0.0.1 | POST "/api/pull"
Sep 03 21:03:48 ada-gym ollama[1623]: [GIN] 2024/09/03 - 21:03:48 | 200 | 561.448248ms | 127.0.0.1 | POST "/api/pull"
Sep 03 21:03:49 ada-gym ollama[1623]: [GIN] 2024/09/03 - 21:03:49 | 200 | 870.515355ms | 127.0.0.1 | POST "/api/pull"
Sep 03 21:03:50 ada-gym ollama[1623]: [GIN] 2024/09/03 - 21:03:50 | 200 | 1.383561874s | 127.0.0.1 | POST "/api/pull"
Sep 03 21:03:51 ada-gym ollama[1623]: [GIN] 2024/09/03 - 21:03:51 | 200 | 555.17719ms | 127.0.0.1 | POST "/api/pull"
Sep 03 21:03:51 ada-gym ollama[1623]: [GIN] 2024/09/03 - 21:03:51 | 200 | 590.355197ms | 127.0.0.1 | POST "/api/pull"
Sep 03 21:03:52 ada-gym ollama[1623]: [GIN] 2024/09/03 - 21:03:52 | 200 | 582.477293ms | 127.0.0.1 | POST "/api/pull"
Sep 03 21:03:53 ada-gym ollama[1623]: [GIN] 2024/09/03 - 21:03:53 | 200 | 641.736396ms | 127.0.0.1 | POST "/api/pull"
Sep 03 21:03:53 ada-gym ollama[1623]: [GIN] 2024/09/03 - 21:03:53 | 200 | 604.034325ms | 127.0.0.1 | POST "/api/pull"
Sep 03 21:03:53 ada-gym ollama[1623]: time=2024-09-03T21:03:53.908Z level=WARN source=types.go:509 msg="invalid option provided" option=keep_alive
Sep 03 21:03:53 ada-gym ollama[1623]: time=2024-09-03T21:03:53.908Z level=WARN source=sched.go:137 msg="multimodal models don't support parallel requests yet"
Sep 03 21:04:04 ada-gym ollama[1623]: time=2024-09-03T21:04:04.542Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 gpu=GPU-3dda594e-2d7a-11ef-8ccb-044f68c14296 parallel=1 available=19272630272 required="5.3 GiB"
Sep 03 21:04:04 ada-gym ollama[1623]: time=2024-09-03T21:04:04.543Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[17.9 GiB]" memory.required.full="5.3 GiB" memory.required.partial="5.3 GiB" memory.required.kv="256.0 MiB" memory.required.allocations="[5.3 GiB]" memory.weights.total="3.9 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="181.0 MiB"
Sep 03 21:04:04 ada-gym ollama[1623]: time=2024-09-03T21:04:04.544Z level=INFO source=server.go:391 msg="starting llama server" cmd="/tmp/ollama1525758512/runners/cuda_v12/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --mmproj /usr/share/ollama/.ollama/models/blobs/sha256-72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539 --parallel 1 --port 34627"
Sep 03 21:04:04 ada-gym ollama[1623]: time=2024-09-03T21:04:04.544Z level=INFO source=sched.go:450 msg="loaded runners" count=1
Sep 03 21:04:04 ada-gym ollama[1623]: time=2024-09-03T21:04:04.544Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
Sep 03 21:04:04 ada-gym ollama[1623]: time=2024-09-03T21:04:04.544Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
Sep 03 21:04:04 ada-gym ollama[2828]: INFO [main] build info | build=1 commit="1e6f655" tid="135144095866880" timestamp=1725397444
Sep 03 21:04:04 ada-gym ollama[2828]: INFO [main] system info | n_threads=16 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="135144095866880" timestamp=1725397444 total_threads=16
Sep 03 21:04:04 ada-gym ollama[2828]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="34627" tid="135144095866880" timestamp=1725397444
Sep 03 21:04:04 ada-gym ollama[1623]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Sep 03 21:04:04 ada-gym ollama[1623]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Sep 03 21:04:04 ada-gym ollama[1623]: ggml_cuda_init: found 1 CUDA devices:
Sep 03 21:04:04 ada-gym ollama[1623]: Device 0: GRID A100-20C, compute capability 8.0, VMM: no
Sep 03 21:04:04 ada-gym ollama[1623]: time=2024-09-03T21:04:04.796Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
Sep 03 21:04:50 ada-gym ollama[1623]: llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 (version GGUF V3 (latest))
Sep 03 21:04:50 ada-gym ollama[1623]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Sep 03 21:04:50 ada-gym ollama[1623]: llama_model_loader: - kv 0: general.architecture str = llama
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.9
|
{
"login": "Ida-Ida",
"id": 59512406,
"node_id": "MDQ6VXNlcjU5NTEyNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/59512406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ida-Ida",
"html_url": "https://github.com/Ida-Ida",
"followers_url": "https://api.github.com/users/Ida-Ida/followers",
"following_url": "https://api.github.com/users/Ida-Ida/following{/other_user}",
"gists_url": "https://api.github.com/users/Ida-Ida/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ida-Ida/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ida-Ida/subscriptions",
"organizations_url": "https://api.github.com/users/Ida-Ida/orgs",
"repos_url": "https://api.github.com/users/Ida-Ida/repos",
"events_url": "https://api.github.com/users/Ida-Ida/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ida-Ida/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6616/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/6616/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8017
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8017/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8017/comments
|
https://api.github.com/repos/ollama/ollama/issues/8017/events
|
https://github.com/ollama/ollama/pull/8017
| 2,728,160,346
|
PR_kwDOJ0Z1Ps6EmWc6
| 8,017
|
server: more support for mixed-case model names
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-12-09T20:10:14
| 2024-12-11T23:30:01
| 2024-12-11T23:29:59
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8017",
"html_url": "https://github.com/ollama/ollama/pull/8017",
"diff_url": "https://github.com/ollama/ollama/pull/8017.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8017.patch",
"merged_at": "2024-12-11T23:29:59"
}
|
Fixes #7944
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8017/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8136
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8136/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8136/comments
|
https://api.github.com/repos/ollama/ollama/issues/8136/events
|
https://github.com/ollama/ollama/issues/8136
| 2,744,729,049
|
I_kwDOJ0Z1Ps6jmT3Z
| 8,136
|
The checksum of `ollama-darwin` is wrong at v0.5.3
|
{
"login": "suzuki-shunsuke",
"id": 13323303,
"node_id": "MDQ6VXNlcjEzMzIzMzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/13323303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suzuki-shunsuke",
"html_url": "https://github.com/suzuki-shunsuke",
"followers_url": "https://api.github.com/users/suzuki-shunsuke/followers",
"following_url": "https://api.github.com/users/suzuki-shunsuke/following{/other_user}",
"gists_url": "https://api.github.com/users/suzuki-shunsuke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suzuki-shunsuke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suzuki-shunsuke/subscriptions",
"organizations_url": "https://api.github.com/users/suzuki-shunsuke/orgs",
"repos_url": "https://api.github.com/users/suzuki-shunsuke/repos",
"events_url": "https://api.github.com/users/suzuki-shunsuke/events{/privacy}",
"received_events_url": "https://api.github.com/users/suzuki-shunsuke/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-12-17T11:52:32
| 2024-12-17T14:51:47
| 2024-12-17T14:51:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://github.com/ollama/ollama/releases/tag/v0.5.3
Download assets from GitHub Releases:
```sh
gh release download -R ollama/ollama v0.5.3
```
sha256sum.txt:
```
b5551feac44b903f13a043e6fe08a966b45830667ff150a310d51a742ce2562d ./ollama-windows-arm64.zip
ed93c82e6da9a02f56fc4f2ff6c3c3208bb78fd8181ac3062fa3c0bc3bb98478 ./ollama-linux-amd64.tgz
89245d1eaca5d4fb6f3024e99c20a681653b14fea25f3eaebf886b4c7fc1efcc ./ollama-darwin
9559e43581090b99a96e2991effb88f0ca2b0df852b35f670f89e1266d2fd3c5 ./ollama-linux-arm64-jetpack5.tgz
3cf7fdc1bde4fdfce8d0413f92456d74d9a369778edd6f6aa519082a48bd39d8 ./ollama-linux-amd64-rocm.tgz
257457df527bbbdf9d4ec43356e12fe6f68a2e3c19ce78a452051b7ad290208d ./ollama-windows-amd64.zip
df6faf6d71f0bbdf49ff724860df6abee422e124f78ab98c8da2564cc834ad75 ./ollama-linux-arm64.tgz
ec7c02a01e29315d21059fd468eea8b9bf02358c6a43adb109e92ca1a6a1b275 ./Ollama-darwin.zip
94826e0080b7ec03801ad69ccdb03bb3b2ad55ba41643047cee6bf388db17e12 ./ollama-linux-arm64-jetpack6.tgz
d7c2e0a0a4799f9a3a3e17d5789b33e003a48bbc365fdc16b39ea36252afc255 ./OllamaSetup.exe
```
Verifying the checksums, the check for `ollama-darwin` fails:
```console
$ cat sha256sum.txt| sha256sum -c
./ollama-windows-arm64.zip: OK
./ollama-linux-amd64.tgz: OK
./ollama-darwin: FAILED
./ollama-linux-arm64-jetpack5.tgz: OK
./ollama-linux-amd64-rocm.tgz: OK
./ollama-windows-amd64.zip: OK
./ollama-linux-arm64.tgz: OK
./Ollama-darwin.zip: OK
./ollama-linux-arm64-jetpack6.tgz: OK
./OllamaSetup.exe: OK
sha256sum: WARNING: 1 computed checksum did NOT match
```
Expected checksum:
```
89245d1eaca5d4fb6f3024e99c20a681653b14fea25f3eaebf886b4c7fc1efcc ./ollama-darwin
```
Actual checksum:
```console
$ sha256sum ollama-darwin
44625729a961ff48c4e0f2fe324f6ab8622c86c0479b0cc6923a8755ca7a6004 ollama-darwin
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8136/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1482
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1482/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1482/comments
|
https://api.github.com/repos/ollama/ollama/issues/1482/events
|
https://github.com/ollama/ollama/issues/1482
| 2,038,057,480
|
I_kwDOJ0Z1Ps55ekoI
| 1,482
|
Permanently changing System prompt
|
{
"login": "luvchurchill",
"id": 46406654,
"node_id": "MDQ6VXNlcjQ2NDA2NjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/46406654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luvchurchill",
"html_url": "https://github.com/luvchurchill",
"followers_url": "https://api.github.com/users/luvchurchill/followers",
"following_url": "https://api.github.com/users/luvchurchill/following{/other_user}",
"gists_url": "https://api.github.com/users/luvchurchill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luvchurchill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luvchurchill/subscriptions",
"organizations_url": "https://api.github.com/users/luvchurchill/orgs",
"repos_url": "https://api.github.com/users/luvchurchill/repos",
"events_url": "https://api.github.com/users/luvchurchill/events{/privacy}",
"received_events_url": "https://api.github.com/users/luvchurchill/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2023-12-12T15:47:42
| 2025-01-08T05:38:17
| 2023-12-12T22:53:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I want to change the system prompt. After I set my own with ``/set system`` and check it with ``/show system``, it shows what I changed it to.
The problem is that when I quit, it changes back to the default boring prompt.
Is there a way to change it permanently?
Thanks
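(For reference: prompts set interactively with ``/set system`` are not persisted across sessions; the usual way to make one permanent is to bake it into a model via a Modelfile. The base model and system text below are illustrative, not from this issue.)
```
FROM llama2
SYSTEM """You are a helpful assistant with my custom instructions."""
```
Then `ollama create my-assistant -f Modelfile` and run `ollama run my-assistant`; the hypothetical name `my-assistant` can be anything.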
|
{
"login": "luvchurchill",
"id": 46406654,
"node_id": "MDQ6VXNlcjQ2NDA2NjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/46406654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luvchurchill",
"html_url": "https://github.com/luvchurchill",
"followers_url": "https://api.github.com/users/luvchurchill/followers",
"following_url": "https://api.github.com/users/luvchurchill/following{/other_user}",
"gists_url": "https://api.github.com/users/luvchurchill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luvchurchill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luvchurchill/subscriptions",
"organizations_url": "https://api.github.com/users/luvchurchill/orgs",
"repos_url": "https://api.github.com/users/luvchurchill/repos",
"events_url": "https://api.github.com/users/luvchurchill/events{/privacy}",
"received_events_url": "https://api.github.com/users/luvchurchill/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1482/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5716
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5716/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5716/comments
|
https://api.github.com/repos/ollama/ollama/issues/5716/events
|
https://github.com/ollama/ollama/pull/5716
| 2,410,134,883
|
PR_kwDOJ0Z1Ps51dfPs
| 5,716
|
Require cached prompt to match certain percentage.
|
{
"login": "rasodu",
"id": 13222196,
"node_id": "MDQ6VXNlcjEzMjIyMTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/13222196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rasodu",
"html_url": "https://github.com/rasodu",
"followers_url": "https://api.github.com/users/rasodu/followers",
"following_url": "https://api.github.com/users/rasodu/following{/other_user}",
"gists_url": "https://api.github.com/users/rasodu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rasodu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rasodu/subscriptions",
"organizations_url": "https://api.github.com/users/rasodu/orgs",
"repos_url": "https://api.github.com/users/rasodu/repos",
"events_url": "https://api.github.com/users/rasodu/events{/privacy}",
"received_events_url": "https://api.github.com/users/rasodu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-07-16T03:45:31
| 2024-07-22T22:04:28
| 2024-07-21T01:24:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5716",
"html_url": "https://github.com/ollama/ollama/pull/5716",
"diff_url": "https://github.com/ollama/ollama/pull/5716.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5716.patch",
"merged_at": null
}
|
We have three options:
1. Any number of characters match
 - This is the current logic. The problem is that even a match on just the beginning of the prompt ("<|user|>") selects the slot, so every request ends up using the first slot.
2. All characters match
 - We could require the entire prompt to match. I use Open WebUI, which lets you edit the last prompt; with this option, even editing the last prompt would re-evaluate the entire prompt.
3. Partial match
 - This is what the PR currently implements. It fixes the bug we are facing and still allows the prompt to change without forcing a full re-evaluation. The PR matches 60% of the cached prompt, but it could be any percentage: essentially, we require re-evaluation if the prompt doesn't match at least 60% of the cached prompt.
Let me know if you have any other ideas to fix this issue.
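A minimal sketch of option 3 in Python (the PR itself lives in the Go server and compares tokens; the helper name, character-level comparison, and example prompts here are illustrative):

```python
def should_reuse_slot(cached: str, incoming: str, threshold: float = 0.6) -> bool:
    """Reuse a cached slot only if the incoming prompt shares a long
    enough prefix with the cached prompt."""
    if not cached:
        return False
    # count characters in the common prefix of the two prompts
    matched = 0
    for a, b in zip(cached, incoming):
        if a != b:
            break
        matched += 1
    # require the match to cover at least `threshold` of the cached prompt
    return matched / len(cached) >= threshold

# A bare "<|user|>" prefix covers far less than 60% of a long cached
# prompt, so it no longer captures the slot:
cached = "<|user|>Tell me a long story about a dragon."
print(should_reuse_slot(cached, "<|user|>What is 2+2?"))   # False
print(should_reuse_slot(cached, cached[:-2]))              # True
```

With a full-match rule (option 2) the second call would also return False, forcing re-evaluation after any edit; the threshold is what lets near-identical prompts keep the cache.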
|
{
"login": "rasodu",
"id": 13222196,
"node_id": "MDQ6VXNlcjEzMjIyMTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/13222196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rasodu",
"html_url": "https://github.com/rasodu",
"followers_url": "https://api.github.com/users/rasodu/followers",
"following_url": "https://api.github.com/users/rasodu/following{/other_user}",
"gists_url": "https://api.github.com/users/rasodu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rasodu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rasodu/subscriptions",
"organizations_url": "https://api.github.com/users/rasodu/orgs",
"repos_url": "https://api.github.com/users/rasodu/repos",
"events_url": "https://api.github.com/users/rasodu/events{/privacy}",
"received_events_url": "https://api.github.com/users/rasodu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5716/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/201
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/201/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/201/comments
|
https://api.github.com/repos/ollama/ollama/issues/201/events
|
https://github.com/ollama/ollama/issues/201
| 1,819,176,463
|
I_kwDOJ0Z1Ps5sbm4P
| 201
|
`ollama push` tag defaults to `latest`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-07-24T21:30:25
| 2023-08-23T17:46:58
| 2023-08-23T17:46:57
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/201/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5313
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5313/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5313/comments
|
https://api.github.com/repos/ollama/ollama/issues/5313/events
|
https://github.com/ollama/ollama/pull/5313
| 2,376,330,320
|
PR_kwDOJ0Z1Ps5zsDNu
| 5,313
|
Update llama.cpp submodule to `dd047b47`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-26T21:43:14
| 2024-06-26T21:45:41
| 2024-06-26T21:45:41
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5313",
"html_url": "https://github.com/ollama/ollama/pull/5313",
"diff_url": "https://github.com/ollama/ollama/pull/5313.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5313.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5313/timeline
| null | null | true
|