Dataset schema:

| Column | Type | Range / values |
| --- | --- | --- |
| url | stringlengths | 51–54 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 65–68 |
| comments_url | stringlengths | 60–63 |
| events_url | stringlengths | 58–61 |
| html_url | stringlengths | 39–44 |
| id | int64 | 1.78B–2.82B |
| node_id | stringlengths | 18–19 |
| number | int64 | 1–8.69k |
| title | stringlengths | 1–382 |
| user | dict | |
| labels | listlengths | 0–5 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0–2 |
| milestone | null | |
| comments | int64 | 0–323 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | stringclasses | 4 values |
| sub_issues_summary | dict | |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | stringlengths | 2–118k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | stringlengths | 60–63 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 4 values |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/ollama/ollama/issues/1205
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1205/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1205/comments
https://api.github.com/repos/ollama/ollama/issues/1205/events
https://github.com/ollama/ollama/issues/1205
2,002,257,814
I_kwDOJ0Z1Ps53WAeW
1,205
/admin Page Auth Key not working
{ "login": "Asher9971", "id": 11883647, "node_id": "MDQ6VXNlcjExODgzNjQ3", "avatar_url": "https://avatars.githubusercontent.com/u/11883647?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Asher9971", "html_url": "https://github.com/Asher9971", "followers_url": "https://api.github.com/users/Asher9971/followers", "following_url": "https://api.github.com/users/Asher9971/following{/other_user}", "gists_url": "https://api.github.com/users/Asher9971/gists{/gist_id}", "starred_url": "https://api.github.com/users/Asher9971/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Asher9971/subscriptions", "organizations_url": "https://api.github.com/users/Asher9971/orgs", "repos_url": "https://api.github.com/users/Asher9971/repos", "events_url": "https://api.github.com/users/Asher9971/events{/privacy}", "received_events_url": "https://api.github.com/users/Asher9971/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-11-20T13:52:10
2023-12-08T23:30:07
2023-11-20T16:23:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
After a fresh install I need to input an Auth Key on the admin page. ![image](https://github.com/jmorganca/ollama/assets/11883647/a36825fe-27e7-4153-b8d3-be886fc7bcb9)

I generated one and pasted it into the compose file:

```yaml
cheshire-cat-core:
  image: ghcr.io/cheshire-cat-ai/core:latest
  container_name: cheshire_cat_core
  #depends_on:
  #  - cheshire-cat-vector-memory
  #  - embedder
  #  - ollama
  environment:
    - PYTHONUNBUFFERED=1
    - WATCHFILES_FORCE_POLLING=true
    - CORE_HOST=${CORE_HOST:-localhost}
    - CORE_PORT=${CORE_PORT:-1865}
    - QDRANT_HOST=${QDRANT_HOST:-cheshire_cat_vector_memory}
    - QDRANT_PORT=${QDRANT_PORT:-6333}
    - CORE_USE_SECURE_PROTOCOLS=${CORE_USE_SECURE_PROTOCOLS:-}
    - LOG_LEVEL=${LOG_LEVEL:-WARNING}
    - API_KEY=${API_KEY:-9da0f457-cecd-459e-********}
    - DEBUG=${DEBUG:-true}
    - SAVE_MEMORY_SNAPSHOTS=${SAVE_MEMORY_SNAPSHOTS:-false}
  ports:
    - ${CORE_PORT:-1865}:80
  volumes:
    - cheshire_cat_core_static:/app/cat/static
    - cheshire_cat_core_public:/app/cat/public
    - cheshire_cat_core_plugins:/app/cat/plugins
  restart: unless-stopped
```

I also tried without any API_KEY variable in the compose file; that didn't work either, and I was still asked to input an auth key on the /admin page.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1205/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2323
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2323/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2323/comments
https://api.github.com/repos/ollama/ollama/issues/2323/events
https://github.com/ollama/ollama/issues/2323
2,114,580,332
I_kwDOJ0Z1Ps5-Ce9s
2,323
RFE: provide checksum for artefacts released
{ "login": "truatpasteurdotfr", "id": 8300215, "node_id": "MDQ6VXNlcjgzMDAyMTU=", "avatar_url": "https://avatars.githubusercontent.com/u/8300215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/truatpasteurdotfr", "html_url": "https://github.com/truatpasteurdotfr", "followers_url": "https://api.github.com/users/truatpasteurdotfr/followers", "following_url": "https://api.github.com/users/truatpasteurdotfr/following{/other_user}", "gists_url": "https://api.github.com/users/truatpasteurdotfr/gists{/gist_id}", "starred_url": "https://api.github.com/users/truatpasteurdotfr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/truatpasteurdotfr/subscriptions", "organizations_url": "https://api.github.com/users/truatpasteurdotfr/orgs", "repos_url": "https://api.github.com/users/truatpasteurdotfr/repos", "events_url": "https://api.github.com/users/truatpasteurdotfr/events{/privacy}", "received_events_url": "https://api.github.com/users/truatpasteurdotfr/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
1
2024-02-02T10:19:29
2024-03-14T19:53:26
2024-03-14T19:53:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, would it be possible to provide sha256 checksums/signatures to verify the downloaded artefacts when a new version is released? Just to be sure that the executables have not been modified. :D Thanks, Tru
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2323/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2323/timeline
null
completed
false
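The request above asks for published SHA-256 sums alongside release artefacts. As a rough illustration of the verification step a user would run once such sums exist, here is a minimal Node/TypeScript sketch; the artefact filename and expected digest are placeholders, not values published by the Ollama project.

```ts
import { createHash } from "node:crypto";
import { createReadStream } from "node:fs";

// Stream a downloaded release artefact through SHA-256 and report the digest.
async function sha256(path: string): Promise<string> {
  const hash = createHash("sha256");
  for await (const chunk of createReadStream(path)) {
    hash.update(chunk as Buffer);
  }
  return hash.digest("hex");
}

const artefact = "ollama-linux-amd64.tgz";                     // assumed local download
const expected = "<digest from the published checksum file>";  // placeholder, not a real value

const digest = await sha256(artefact);
console.log(digest === expected ? "checksum OK" : `MISMATCH: got ${digest}`);
```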
https://api.github.com/repos/ollama/ollama/issues/5106
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5106/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5106/comments
https://api.github.com/repos/ollama/ollama/issues/5106/events
https://github.com/ollama/ollama/pull/5106
2,358,702,658
PR_kwDOJ0Z1Ps5yxFAP
5,106
Tighten up memory prediction logging
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-06-18T02:11:21
2024-06-18T16:24:50
2024-06-18T16:24:38
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5106", "html_url": "https://github.com/ollama/ollama/pull/5106", "diff_url": "https://github.com/ollama/ollama/pull/5106.diff", "patch_url": "https://github.com/ollama/ollama/pull/5106.patch", "merged_at": "2024-06-18T16:24:38" }
Prior to this change, we logged the memory prediction multiple times as the scheduler iterates to find a suitable configuration, which can be confusing since only the last log before the server starts is actually valid. This now logs once just before starting the server on the final configuration. It also reports which library is used instead of always saying "offloading to gpu", even when running on the CPU.

A few examples (non-debug regular logging level):

```
time=2024-06-17T19:07:15.507-07:00 level=INFO source=types.go:98 msg="inference compute" id=0 library=metal compute="" driver=0.0 name="" total="96.0 GiB" available="96.0 GiB"
[GIN] 2024/06/17 - 19:07:33 | 200 | 313.875µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/06/17 - 19:07:33 | 200 | 2.473333ms | 127.0.0.1 | POST "/api/show"
time=2024-06-17T19:07:33.438-07:00 level=INFO source=memory.go:303 msg="offload to metal" layers.requested=-1 layers.model=27 layers.offload=27 layers.split="" memory.available="[96.0 GiB]" memory.required.full="3.2 GiB" memory.required.partial="3.2 GiB" memory.required.kv="650.0 MiB" memory.required.allocations="[3.2 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="2.2 GiB" memory.weights.nonrepeating="103.8 MiB" memory.graph.full="157.0 MiB" memory.graph.partial="157.0 MiB"
time=2024-06-17T19:07:33.439-07:00 level=INFO source=server.go:359 msg="starting llama server" cmd="/var/folders/hs/0tcx8spd1vv390h0j6jq5vq80000gn/T/ollama3083603568/runners/metal/ollama_llama_server --model /Users/daniel/.ollama/models/blobs/sha256-66002b78c70a22ab25e16cc9a1736c6cc6335398c7312e3eb33db202350afe66 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 27 --parallel 1 --port 63538"
```

```
time=2024-06-18T02:09:26.404Z level=WARN source=gpu.go:225 msg="CPU does not have minimum vector extensions, GPU inference disabled" required=avx detected="no vector extensions"
time=2024-06-18T02:09:26.405Z level=INFO source=types.go:98 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="31.3 GiB" available="30.4 GiB"
[GIN] 2024/06/18 - 02:09:36 | 200 | 662.875µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/06/18 - 02:09:36 | 200 | 3.819958ms | 127.0.0.1 | POST "/api/show"
time=2024-06-18T02:09:36.898Z level=INFO source=memory.go:303 msg="offload to cpu" layers.requested=-1 layers.model=27 layers.offload=0 layers.split="" memory.available="[30.4 GiB]" memory.required.full="2.7 GiB" memory.required.partial="0 B" memory.required.kv="650.0 MiB" memory.required.allocations="[2.7 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="2.2 GiB" memory.weights.nonrepeating="103.8 MiB" memory.graph.full="157.0 MiB" memory.graph.partial="177.2 MiB"
time=2024-06-18T02:09:36.905Z level=INFO source=server.go:359 msg="starting llama server" cmd="/tmp/ollama284292296/runners/cpu/ollama_llama_server --model /root/.ollama/models/blobs/sha256-66002b78c70a22ab25e16cc9a1736c6cc6335398c7312e3eb33db202350afe66 --ctx-size 2048 --batch-size 512 --embedding --log-disable --parallel 1 --port 43251"
```

```
time=2024-06-17T19:02:51.398-07:00 level=INFO source=types.go:98 msg="inference compute" id=GPU-1c750365-54dc-7082-7c6b-9dd953a68ab6 library=cuda compute=6.1 driver=12.3 name="NVIDIA GeForce GTX 1060 6GB" total="5.9 GiB" available="5.7 GiB"
[GIN] 2024/06/17 - 19:02:57 | 200 | 28.835µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/06/17 - 19:02:57 | 200 | 453.399µs | 127.0.0.1 | POST "/api/show"
time=2024-06-17T19:02:58.233-07:00 level=INFO source=memory.go:303 msg="offload to cuda" layers.requested=-1 layers.model=27 layers.offload=27 layers.split="" memory.available="[5.7 GiB]" memory.required.full="3.1 GiB" memory.required.partial="3.1 GiB" memory.required.kv="650.0 MiB" memory.required.allocations="[3.1 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="2.2 GiB" memory.weights.nonrepeating="103.8 MiB" memory.graph.full="157.0 MiB" memory.graph.partial="177.2 MiB"
time=2024-06-17T19:02:58.233-07:00 level=INFO source=server.go:359 msg="starting llama server" cmd="/tmp/ollama3201791839/runners/cuda_v11/ollama_llama_server --model /home/daniel/.ollama/models/blobs/sha256-66002b78c70a22ab25e16cc9a1736c6cc6335398c7312e3eb33db202350afe66 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 27 --parallel 1 --port 43155"
```
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5106/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5106/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6034
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6034/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6034/comments
https://api.github.com/repos/ollama/ollama/issues/6034/events
https://github.com/ollama/ollama/issues/6034
2,434,236,468
I_kwDOJ0Z1Ps6RF4A0
6,034
can't import DarkIdol-Llama-3.1-Instruct-1.2-Uncensored:8b_Q8_0
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/taozhiyuai/followers", "following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}", "gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}", "starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions", "organizations_url": "https://api.github.com/users/taozhiyuai/orgs", "repos_url": "https://api.github.com/users/taozhiyuai/repos", "events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}", "received_events_url": "https://api.github.com/users/taozhiyuai/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-07-29T01:10:30
2024-08-02T07:34:08
2024-08-02T07:34:08
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

1. modelfile

```
FROM ./DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored.Q8_0.gguf
TEMPLATE "{{- if .Messages }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>
{{- end }}
{{- range .Messages }}<|start_header_id|>{{ .Role }}<|end_header_id|>
{{ .Content }}<|eot_id|>
{{- end }}<|start_header_id|>assistant<|end_header_id|>
{{ else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}"
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
```

```
taozhiyu@603e5f4a42f1 DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-GGUF % ollama create taozhiyuai/DarkIdol-Llama-3.1-Instruct-1.2-Uncensored:8b_Q8_0 -f modelfile
transferring model data
using existing layer sha256:9ad16d0a1bb54a5efcaa4dfbfde70254768724ec5f453934de4e688dca2a3070
using existing layer sha256:cdbcae7e69d520e3d5a17b4979c31331dfba959f36641a81576349875d682127
using existing layer sha256:56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb
creating new layer sha256:a67d6d41b0d6ca312152b392a024900ae436f3eee547dd5c914dadba0229bf8b
writing manifest
success
taozhiyu@603e5f4a42f1 DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-GGUF % ollama run taozhiyuai/DarkIdol-Llama-3.1-Instruct-1.2-Uncensored:8b_Q8_0
Error: llama runner process has terminated: error:done_getting_tensors: wrong number of tensors; expected 292, got 291
```

### OS

macOS

### GPU

Apple

### CPU

Apple

### Ollama version

0.3.0
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/taozhiyuai/followers", "following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}", "gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}", "starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions", "organizations_url": "https://api.github.com/users/taozhiyuai/orgs", "repos_url": "https://api.github.com/users/taozhiyuai/repos", "events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}", "received_events_url": "https://api.github.com/users/taozhiyuai/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6034/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6034/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/162
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/162/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/162/comments
https://api.github.com/repos/ollama/ollama/issues/162/events
https://github.com/ollama/ollama/issues/162
1,815,859,719
I_kwDOJ0Z1Ps5sO9IH
162
Don't automatically start on startup / have an option to disable this
{ "login": "gregsadetsky", "id": 1017304, "node_id": "MDQ6VXNlcjEwMTczMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1017304?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gregsadetsky", "html_url": "https://github.com/gregsadetsky", "followers_url": "https://api.github.com/users/gregsadetsky/followers", "following_url": "https://api.github.com/users/gregsadetsky/following{/other_user}", "gists_url": "https://api.github.com/users/gregsadetsky/gists{/gist_id}", "starred_url": "https://api.github.com/users/gregsadetsky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gregsadetsky/subscriptions", "organizations_url": "https://api.github.com/users/gregsadetsky/orgs", "repos_url": "https://api.github.com/users/gregsadetsky/repos", "events_url": "https://api.github.com/users/gregsadetsky/events{/privacy}", "received_events_url": "https://api.github.com/users/gregsadetsky/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
26
2023-07-21T13:57:28
2025-01-23T17:15:04
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
EDIT: [there's a PR for it now](https://github.com/ollama/ollama/pull/7097)!

---

Thanks for making this! I noticed that on macOS (I suppose it's the same on Windows) the app sets itself to open at login. This is done here: https://github.com/jmorganca/ollama/blob/91cd54016c47b71223e8263c44250766874e05cf/app/src/index.ts#L175,L180

1) I'm not sure whether that should be the case by default (up for discussion)
2) I don't remember seeing a warning about this when installing the app (this might be debatable as well)
3) it would be great to have an option within the app to disable this

Cheers
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/162/reactions", "total_count": 54, "+1": 52, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/ollama/ollama/issues/162/timeline
null
null
false
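For context on the mechanism discussed in the issue above: the file it links (app/src/index.ts) is part of an Electron app, and Electron exposes login-item control through `app.setLoginItemSettings`. The TypeScript sketch below shows how an opt-in toggle might look; it is an assumption about how such a setting could be wired, not the project's actual implementation.

```ts
import { app } from "electron";

// Hypothetical opt-in toggle: only register the app as a login item when the
// user enables it from a settings menu, instead of unconditionally on first launch.
export function setOpenAtLogin(enabled: boolean): void {
  app.setLoginItemSettings({
    openAtLogin: enabled,
    openAsHidden: true, // macOS: start in the menu bar without opening a window
  });
}

// Read the current state back, e.g. to render a checkbox in the tray menu.
export function openAtLoginEnabled(): boolean {
  return app.getLoginItemSettings().openAtLogin;
}
```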
https://api.github.com/repos/ollama/ollama/issues/3315
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3315/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3315/comments
https://api.github.com/repos/ollama/ollama/issues/3315/events
https://github.com/ollama/ollama/pull/3315
2,204,023,930
PR_kwDOJ0Z1Ps5qkYjJ
3,315
Added [N,y] prompt to confirm the deletion of a model
{ "login": "Icelain", "id": 50962640, "node_id": "MDQ6VXNlcjUwOTYyNjQw", "avatar_url": "https://avatars.githubusercontent.com/u/50962640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Icelain", "html_url": "https://github.com/Icelain", "followers_url": "https://api.github.com/users/Icelain/followers", "following_url": "https://api.github.com/users/Icelain/following{/other_user}", "gists_url": "https://api.github.com/users/Icelain/gists{/gist_id}", "starred_url": "https://api.github.com/users/Icelain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Icelain/subscriptions", "organizations_url": "https://api.github.com/users/Icelain/orgs", "repos_url": "https://api.github.com/users/Icelain/repos", "events_url": "https://api.github.com/users/Icelain/events{/privacy}", "received_events_url": "https://api.github.com/users/Icelain/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
3
2024-03-23T19:50:37
2024-03-31T20:40:16
2024-03-31T17:13:37
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3315", "html_url": "https://github.com/ollama/ollama/pull/3315", "diff_url": "https://github.com/ollama/ollama/pull/3315.diff", "patch_url": "https://github.com/ollama/ollama/pull/3315.patch", "merged_at": null }
Fixes #3108
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3315/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3315/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8520
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8520/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8520/comments
https://api.github.com/repos/ollama/ollama/issues/8520/events
https://github.com/ollama/ollama/issues/8520
2,802,264,444
I_kwDOJ0Z1Ps6nByl8
8,520
$OLLAMA_MODELS no longer respected?
{ "login": "yuimbo", "id": 83395410, "node_id": "MDQ6VXNlcjgzMzk1NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/83395410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuimbo", "html_url": "https://github.com/yuimbo", "followers_url": "https://api.github.com/users/yuimbo/followers", "following_url": "https://api.github.com/users/yuimbo/following{/other_user}", "gists_url": "https://api.github.com/users/yuimbo/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuimbo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuimbo/subscriptions", "organizations_url": "https://api.github.com/users/yuimbo/orgs", "repos_url": "https://api.github.com/users/yuimbo/repos", "events_url": "https://api.github.com/users/yuimbo/events{/privacy}", "received_events_url": "https://api.github.com/users/yuimbo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2025-01-21T16:12:04
2025-01-22T09:34:06
2025-01-22T09:33:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

I've been using the OLLAMA_MODELS variable to store my models on an external drive. I can see that this is set:

```
$ echo $OLLAMA_MODELS
/Volumes/bigdrive/ollama/models
```

I can see that my models are stored there:

```
$ tree $OLLAMA_MODELS
/Volumes/bigdrive/ollama/models
├── blobs
│   ├── sha256-0b4284c1f87029e67654c7953afa16279961632cf73dcfe33374c4c2f298fa35
│   ├── sha256-1ae29500b4be5bb4ce3981e3692ee8689ce5df0ae3080ed4c5ff9f72bf01ba6a
│   ├── sha256-2eedb02591412148c2fea86b5896da88ec5cddea551bcccde3270aa9d1f048ff
│   ....
└── manifests
    └── registry.ollama.ai
        ├── alibayram
        │   └── erurollm-9b-instruct
        │       └── latest
        ├── huihui_ai
        │   ├── llama3.3-abliterated
        │   │   └── latest
        │   └── qwq-abliterated
        │       └── latest
        ├── library
        │   ├── deepseek-r1
        │   │   ├── 32b
        │   │   └── 70b
....
```

But for some reason the models do not show up in ollama:

```
$ ollama list
NAME    ID    SIZE    MODIFIED
```

It seems the OLLAMA_MODELS variable is not being respected. If there are any debug flags I can use to inspect what's going wrong, let me know!

### OS

macOS

### GPU

Apple

### CPU

Apple

### Ollama version

0.5.7
{ "login": "yuimbo", "id": 83395410, "node_id": "MDQ6VXNlcjgzMzk1NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/83395410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuimbo", "html_url": "https://github.com/yuimbo", "followers_url": "https://api.github.com/users/yuimbo/followers", "following_url": "https://api.github.com/users/yuimbo/following{/other_user}", "gists_url": "https://api.github.com/users/yuimbo/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuimbo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuimbo/subscriptions", "organizations_url": "https://api.github.com/users/yuimbo/orgs", "repos_url": "https://api.github.com/users/yuimbo/repos", "events_url": "https://api.github.com/users/yuimbo/events{/privacy}", "received_events_url": "https://api.github.com/users/yuimbo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8520/timeline
null
completed
false
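One way to narrow down a report like the one above is to compare what is on disk under $OLLAMA_MODELS with what the running server actually reports; if the two disagree, the server process is probably not seeing the same OLLAMA_MODELS value as the interactive shell (a common cause when the macOS app is launched from the GUI rather than from that shell). A rough Node/TypeScript sketch, assuming a server on the default local port and the directory layout shown in the issue:

```ts
import { readdir } from "node:fs/promises";
import { join } from "node:path";

// What the shell's OLLAMA_MODELS points at on disk.
const modelsDir = process.env.OLLAMA_MODELS ?? "";
const onDisk = await readdir(join(modelsDir, "manifests", "registry.ollama.ai", "library"));
console.log("manifests on disk:", onDisk);

// What the running server can see (default local endpoint assumed).
const res = await fetch("http://localhost:11434/api/tags");
const { models } = (await res.json()) as { models: { name: string }[] };
console.log("models the server reports:", models.map((m) => m.name));
```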
https://api.github.com/repos/ollama/ollama/issues/7402
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7402/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7402/comments
https://api.github.com/repos/ollama/ollama/issues/7402/events
https://github.com/ollama/ollama/issues/7402
2,619,012,896
I_kwDOJ0Z1Ps6cGvcg
7,402
ollama run aya-expanse:32b gives nonsensical output
{ "login": "lefromage", "id": 757997, "node_id": "MDQ6VXNlcjc1Nzk5Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/757997?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lefromage", "html_url": "https://github.com/lefromage", "followers_url": "https://api.github.com/users/lefromage/followers", "following_url": "https://api.github.com/users/lefromage/following{/other_user}", "gists_url": "https://api.github.com/users/lefromage/gists{/gist_id}", "starred_url": "https://api.github.com/users/lefromage/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lefromage/subscriptions", "organizations_url": "https://api.github.com/users/lefromage/orgs", "repos_url": "https://api.github.com/users/lefromage/repos", "events_url": "https://api.github.com/users/lefromage/events{/privacy}", "received_events_url": "https://api.github.com/users/lefromage/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
3
2024-10-28T17:08:55
2024-10-28T21:17:57
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

aya-expanse 8b runs fine, but 32b produces nonsensical output as shown below.

```
ollama run aya-expanse:8b
>>> hi
Hello! How can I help you today?
```

```
ollama run aya-expanse:32b
>>> hi
L<PAD>KJ<PAD>OLE6IEGU;F<B9DN:FM4VNOUSV7I=5<UNHBGUQTUR=GOG;<PAD>LRN<CLE<;7BV@>T:8ND5>>;<34<PAD>LR;C;D7M6<QO5UIOI7BBUG9:?<D6UM<SO:MVKA6OAUFIK67UD@L<PAD>KJ<PAD>OLE6IEGU;F<B9DN:FM4VNOUSV7I=5<UNHBGUQTUR=GOG;<PAD>LRN<CLE<;7BV@>T:8ND5>>;<34<PAD>LR;C;D7M6<QO5UIOI7BBUG9:?<D6UM<SO:MVKA6OAUFIK67UD@M67<PAD>JHD?N4:J?
```

..... keeps producing nonsensical output

### OS

macOS

### GPU

Apple

### CPU

Apple

### Ollama version

0.3.14
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7402/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3189
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3189/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3189/comments
https://api.github.com/repos/ollama/ollama/issues/3189/events
https://github.com/ollama/ollama/issues/3189
2,190,445,483
I_kwDOJ0Z1Ps6Cj4ur
3,189
Add support for amd Radeon 780M gfx1103 - override works
{ "login": "thbley", "id": 941223, "node_id": "MDQ6VXNlcjk0MTIyMw==", "avatar_url": "https://avatars.githubusercontent.com/u/941223?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thbley", "html_url": "https://github.com/thbley", "followers_url": "https://api.github.com/users/thbley/followers", "following_url": "https://api.github.com/users/thbley/following{/other_user}", "gists_url": "https://api.github.com/users/thbley/gists{/gist_id}", "starred_url": "https://api.github.com/users/thbley/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thbley/subscriptions", "organizations_url": "https://api.github.com/users/thbley/orgs", "repos_url": "https://api.github.com/users/thbley/repos", "events_url": "https://api.github.com/users/thbley/events{/privacy}", "received_events_url": "https://api.github.com/users/thbley/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5755339642, "node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg", "url": "https://api.github.com/repos/ollama/ollama/labels/linux", "name": "linux", "color": "516E70", "default": false, "description": "" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" }, { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA", "url": "https://api.github.com/repos/ollama/ollama/labels/amd", "name": "amd", "color": "000000", "default": false, "description": "Issues relating to AMD GPUs and ROCm" } ]
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
14
2024-03-17T02:29:13
2025-01-29T22:49:46
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What are you trying to do?

Please support GPU acceleration using "AMD Ryzen 7 PRO 7840U w/ Radeon 780M Graphics" on Linux (Ubuntu 22.04). Newer notebooks ship with the AMD 7840U and support setting VRAM from 1 GB to 8 GB in the BIOS. With GPU acceleration only 1 vCPU is used and the user experience with 7B models is quite good.

Not working ("amdgpu [0] gfx1103 is not supported"):

```
OLLAMA_LLM_LIBRARY="rocm_v60000" /usr/bin/ollama serve
time=2024-03-17T03:02:49.566+01:00 level=INFO source=images.go:806 msg="total blobs: 20"
time=2024-03-17T03:02:49.566+01:00 level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-17T03:02:49.566+01:00 level=INFO source=routes.go:1110 msg="Listening on 127.0.0.1:11434 (version 0.1.29)"
time=2024-03-17T03:02:49.566+01:00 level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to /tmp/ollama3117537729/runners ..."
time=2024-03-17T03:02:51.423+01:00 level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [rocm_v60000 cpu_avx cpu_avx2 cuda_v11 cpu]"
time=2024-03-17T03:02:51.424+01:00 level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-17T03:02:51.424+01:00 level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-17T03:02:51.427+01:00 level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"
time=2024-03-17T03:02:51.427+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-17T03:02:51.427+01:00 level=INFO source=amd_linux.go:50 msg="AMD Driver: 6.3.6"
time=2024-03-17T03:02:51.427+01:00 level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx1103]"
time=2024-03-17T03:02:51.430+01:00 level=WARN source=amd_linux.go:114 msg="amdgpu [0] gfx1103 is not supported by /tmp/ollama3117537729/rocm [gfx1030 gfx1100 gfx1101 gfx1102 gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942]"
time=2024-03-17T03:02:51.430+01:00 level=WARN source=amd_linux.go:116 msg="See https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md for HSA_OVERRIDE_GFX_VERSION usage"
time=2024-03-17T03:02:51.430+01:00 level=INFO source=amd_linux.go:127 msg="all detected amdgpus are skipped, falling back to CPU"
time=2024-03-17T03:02:51.430+01:00 level=INFO source=routes.go:1133 msg="no GPU detected"
```

My current workaround is to force gfx1102 (no issues so far):

```
HSA_OVERRIDE_GFX_VERSION="11.0.2" OLLAMA_LLM_LIBRARY="rocm_v60000" /usr/bin/ollama serve
time=2024-03-17T03:04:54.436+01:00 level=INFO source=images.go:806 msg="total blobs: 20"
time=2024-03-17T03:04:54.436+01:00 level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-17T03:04:54.437+01:00 level=INFO source=routes.go:1110 msg="Listening on 127.0.0.1:11434 (version 0.1.29)"
time=2024-03-17T03:04:54.437+01:00 level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to /tmp/ollama2902102366/runners ..."
time=2024-03-17T03:04:56.315+01:00 level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11 rocm_v60000 cpu cpu_avx]"
time=2024-03-17T03:04:56.315+01:00 level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-17T03:04:56.315+01:00 level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-17T03:04:56.317+01:00 level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"
time=2024-03-17T03:04:56.317+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-17T03:04:56.317+01:00 level=INFO source=amd_linux.go:50 msg="AMD Driver: 6.3.6"
time=2024-03-17T03:04:56.318+01:00 level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx1103]"
time=2024-03-17T03:04:56.319+01:00 level=INFO source=amd_linux.go:246 msg="[0] amdgpu totalMemory 8192M"
time=2024-03-17T03:04:56.319+01:00 level=INFO source=amd_linux.go:247 msg="[0] amdgpu freeMemory 8192M"
```

Note: using "rocm_v6" was not working for me, so I chose "rocm_v60000" (`ls /tmp/ollama758986631/runners/` gives cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60000).

Some benchmarks using the 7840U (numbers from the second run), Ubuntu 22.04, kernel 6.5, VRAM switched to 8 GB in the BIOS:

```
> CPU
OLLAMA_LLM_LIBRARY="cpu_avx2" /usr/bin/ollama serve
ollama run llama2:latest "where was beethoven born?" --verbose
Ludwig van Beethoven was born in Bonn, Germany on December 16, 1770.
total duration: 4.343514826s
load duration: 264.691µs
prompt eval duration: 168.205ms
prompt eval rate: 0.00 tokens/s
eval count: 26 token(s)
eval duration: 4.174563s
eval rate: 6.23 tokens/s
```

```
> GPU
HSA_OVERRIDE_GFX_VERSION="11.0.2" OLLAMA_LLM_LIBRARY="rocm_v60000u_avx2" /usr/bin/ollama serve
ollama run llama2:latest "where was beethoven born?" --verbose
Ludwig van Beethoven was born in Bonn, Germany on December 16, 1770.
total duration: 1.513455927s
load duration: 161.535µs
prompt eval duration: 65.979ms
prompt eval rate: 0.00 tokens/s
eval count: 27 token(s)
eval duration: 1.446805s
eval rate: 18.66 tokens/s
```

### How should we solve this?

_No response_

### What is the impact of not solving this?

_No response_

### Anything else?

_No response_
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3189/reactions", "total_count": 24, "+1": 24, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3189/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4717
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4717/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4717/comments
https://api.github.com/repos/ollama/ollama/issues/4717/events
https://github.com/ollama/ollama/issues/4717
2,325,127,482
I_kwDOJ0Z1Ps6KlqE6
4,717
phi3:medium-128k doesn't use the full context window by default
{ "login": "derluke", "id": 6739699, "node_id": "MDQ6VXNlcjY3Mzk2OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6739699?v=4", "gravatar_id": "", "url": "https://api.github.com/users/derluke", "html_url": "https://github.com/derluke", "followers_url": "https://api.github.com/users/derluke/followers", "following_url": "https://api.github.com/users/derluke/following{/other_user}", "gists_url": "https://api.github.com/users/derluke/gists{/gist_id}", "starred_url": "https://api.github.com/users/derluke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/derluke/subscriptions", "organizations_url": "https://api.github.com/users/derluke/orgs", "repos_url": "https://api.github.com/users/derluke/repos", "events_url": "https://api.github.com/users/derluke/events{/privacy}", "received_events_url": "https://api.github.com/users/derluke/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-05-30T08:57:43
2024-05-30T16:21:44
2024-05-30T16:21:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I was playing with the new phi3:medium-128k model and was surprised to see it struggled to keep track of my earlier questions or handle long documents. But on the bright side, it was surprisingly fast.

After a little digging I found out how to specify the context size using a new model file. I decided to give it a go and used this:

```
FROM phi3:medium-128k
TEMPLATE "{{ if .System }}<|system|>
{{ .System }}<|end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|end|>
{{ end }}<|assistant|>
{{ .Response }}<|end|>"
PARAMETER stop <|end|>
PARAMETER stop <|user|>
PARAMETER stop <|assistant|>
PARAMETER num_ctx 65536
```

(I wasn't sure exactly how much 128k is supposed to be (assume 2**17), so I decided to be on the safe side and take the power of two below.)

And it worked: now the model can read long documents and behaves as expected (and is rather slow, but that is due to my limited hardware).

It would be amazing if these models came pre-configured such that they use the full context window by default.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4717/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4717/timeline
null
completed
false
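Besides baking `num_ctx` into a Modelfile as the issue above describes, the context window can also be raised per request through the API's `options` field. A minimal TypeScript sketch, assuming a local server and reusing the 65536 value from the issue; the prompt is illustrative only.

```ts
// Request a larger context window for a single /api/generate call instead of
// creating a derived model via a Modelfile.
const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "phi3:medium-128k",
    prompt: "Summarise the following long document: ...",
    stream: false,
    options: { num_ctx: 65536 }, // overrides the model's default context size
  }),
});

const { response } = (await res.json()) as { response: string };
console.log(response);
```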
https://api.github.com/repos/ollama/ollama/issues/6177
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6177/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6177/comments
https://api.github.com/repos/ollama/ollama/issues/6177/events
https://github.com/ollama/ollama/issues/6177
2,448,758,778
I_kwDOJ0Z1Ps6R9Rf6
6,177
run OI with OLLAMA SERVER IN NETWORK
{ "login": "RM-S2", "id": 174100356, "node_id": "U_kgDOCmCPhA", "avatar_url": "https://avatars.githubusercontent.com/u/174100356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RM-S2", "html_url": "https://github.com/RM-S2", "followers_url": "https://api.github.com/users/RM-S2/followers", "following_url": "https://api.github.com/users/RM-S2/following{/other_user}", "gists_url": "https://api.github.com/users/RM-S2/gists{/gist_id}", "starred_url": "https://api.github.com/users/RM-S2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RM-S2/subscriptions", "organizations_url": "https://api.github.com/users/RM-S2/orgs", "repos_url": "https://api.github.com/users/RM-S2/repos", "events_url": "https://api.github.com/users/RM-S2/events{/privacy}", "received_events_url": "https://api.github.com/users/RM-S2/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-08-05T14:52:49
2024-08-08T15:48:38
2024-08-08T15:48:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

I am trying to run OI with an Ollama server on another computer in my network. The command I use to start OI is:

```
interpreter --model ollama/llama3.1 --api_base "http://192.168.3.13:11434" --api_key "fake_key"
```

But I get an error saying it can't find Ollama on my computer, which I think is wrong, because OI should find Ollama on my server computer. What should I do?

### OS

Windows

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.1.32
{ "login": "RM-S2", "id": 174100356, "node_id": "U_kgDOCmCPhA", "avatar_url": "https://avatars.githubusercontent.com/u/174100356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RM-S2", "html_url": "https://github.com/RM-S2", "followers_url": "https://api.github.com/users/RM-S2/followers", "following_url": "https://api.github.com/users/RM-S2/following{/other_user}", "gists_url": "https://api.github.com/users/RM-S2/gists{/gist_id}", "starred_url": "https://api.github.com/users/RM-S2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RM-S2/subscriptions", "organizations_url": "https://api.github.com/users/RM-S2/orgs", "repos_url": "https://api.github.com/users/RM-S2/repos", "events_url": "https://api.github.com/users/RM-S2/events{/privacy}", "received_events_url": "https://api.github.com/users/RM-S2/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6177/timeline
null
completed
false
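A quick way to separate client-side configuration problems from server-side ones in a setup like the one above is to query the remote Ollama server's API directly from the client machine. A small TypeScript sketch, using the address from the issue; note that by default Ollama listens only on 127.0.0.1, so the server machine may need OLLAMA_HOST=0.0.0.0 set for it to be reachable over the LAN.

```ts
// Reachability check against the remote Ollama server before pointing a
// client (here the `interpreter` CLI) at it; the address comes from the issue above.
const base = "http://192.168.3.13:11434";

const res = await fetch(`${base}/api/tags`);
const { models } = (await res.json()) as { models: { name: string }[] };
console.log(models.map((m) => m.name)); // llama3.1 should appear here if it has been pulled
```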
https://api.github.com/repos/ollama/ollama/issues/156
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/156/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/156/comments
https://api.github.com/repos/ollama/ollama/issues/156/events
https://github.com/ollama/ollama/issues/156
1,815,137,426
I_kwDOJ0Z1Ps5sMMyS
156
Fine-tuning support
{ "login": "shrikrishnaholla", "id": 1164410, "node_id": "MDQ6VXNlcjExNjQ0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1164410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shrikrishnaholla", "html_url": "https://github.com/shrikrishnaholla", "followers_url": "https://api.github.com/users/shrikrishnaholla/followers", "following_url": "https://api.github.com/users/shrikrishnaholla/following{/other_user}", "gists_url": "https://api.github.com/users/shrikrishnaholla/gists{/gist_id}", "starred_url": "https://api.github.com/users/shrikrishnaholla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shrikrishnaholla/subscriptions", "organizations_url": "https://api.github.com/users/shrikrishnaholla/orgs", "repos_url": "https://api.github.com/users/shrikrishnaholla/repos", "events_url": "https://api.github.com/users/shrikrishnaholla/events{/privacy}", "received_events_url": "https://api.github.com/users/shrikrishnaholla/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
20
2023-07-21T04:33:31
2024-10-16T18:36:41
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
First of all, thanks for building this tool and releasing it as open source. I like that the interfaces seem similar to `docker`. I also like the idea of the Modelfile. Maybe it could also be used to define a fine-tuning process. That would allow making the build process part of a CI/CD routine and would allow building private fine-tuned models with a good developer UX, which I'm sure lots of people are looking for presently.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/156/reactions", "total_count": 32, "+1": 27, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 2 }
https://api.github.com/repos/ollama/ollama/issues/156/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4433
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4433/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4433/comments
https://api.github.com/repos/ollama/ollama/issues/4433/events
https://github.com/ollama/ollama/issues/4433
2,296,066,016
I_kwDOJ0Z1Ps6I2y_g
4,433
GPU layer control / prioritisation
{ "login": "AncientMystic", "id": 62780271, "node_id": "MDQ6VXNlcjYyNzgwMjcx", "avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AncientMystic", "html_url": "https://github.com/AncientMystic", "followers_url": "https://api.github.com/users/AncientMystic/followers", "following_url": "https://api.github.com/users/AncientMystic/following{/other_user}", "gists_url": "https://api.github.com/users/AncientMystic/gists{/gist_id}", "starred_url": "https://api.github.com/users/AncientMystic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AncientMystic/subscriptions", "organizations_url": "https://api.github.com/users/AncientMystic/orgs", "repos_url": "https://api.github.com/users/AncientMystic/repos", "events_url": "https://api.github.com/users/AncientMystic/events{/privacy}", "received_events_url": "https://api.github.com/users/AncientMystic/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-05-14T18:05:59
2024-05-14T18:05:59
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Would it be possible to add something to Ollama's configuration, similar to LM Studio, to control GPU utilisation? Also, would it be possible to tune Ollama so it only loads certain layers to the GPU, similar to Unsloth? Possibly a way to load the accessed and adjacent layers, with configuration for how many adjacent layers (or how much of the model) to load at once, and either offload the unused layers to RAM or not load them at all, swapping layers in as needed instead of loading the entire model every time. It could perhaps be added as lazy loading or something similar, to enable the use of larger models at higher performance. This approach seems to give a significant performance advantage, especially on lower-end hardware for those of us without extreme setups, if it is possible within Ollama.
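For reference, Ollama already exposes a coarse version of this control through the `num_gpu` option, which sets how many layers are offloaded to the GPU; the finer-grained lazy loading described above would go beyond that. A minimal Python sketch of the existing knob follows — the model name and the layer count are illustrative assumptions, not recommendations:

```python
import requests

# Ask a local Ollama server to offload only part of the model to the GPU.
# "num_gpu" is the number of layers sent to the GPU; the rest stay on CPU/RAM.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",            # assumed to be pulled already
        "prompt": "Hello",
        "stream": False,
        "options": {"num_gpu": 20},   # illustrative value: offload 20 layers
    },
)
print(response.json()["response"])
```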
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4433/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4433/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1529
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1529/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1529/comments
https://api.github.com/repos/ollama/ollama/issues/1529/events
https://github.com/ollama/ollama/pull/1529
2,042,606,963
PR_kwDOJ0Z1Ps5iDRk6
1,529
README with Enchanted iOS App
{ "login": "gluonfield", "id": 5672094, "node_id": "MDQ6VXNlcjU2NzIwOTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5672094?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gluonfield", "html_url": "https://github.com/gluonfield", "followers_url": "https://api.github.com/users/gluonfield/followers", "following_url": "https://api.github.com/users/gluonfield/following{/other_user}", "gists_url": "https://api.github.com/users/gluonfield/gists{/gist_id}", "starred_url": "https://api.github.com/users/gluonfield/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gluonfield/subscriptions", "organizations_url": "https://api.github.com/users/gluonfield/orgs", "repos_url": "https://api.github.com/users/gluonfield/repos", "events_url": "https://api.github.com/users/gluonfield/events{/privacy}", "received_events_url": "https://api.github.com/users/gluonfield/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-12-14T22:35:25
2023-12-15T19:37:29
2023-12-15T19:37:29
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1529", "html_url": "https://github.com/ollama/ollama/pull/1529", "diff_url": "https://github.com/ollama/ollama/pull/1529.diff", "patch_url": "https://github.com/ollama/ollama/pull/1529.patch", "merged_at": "2023-12-15T19:37:29" }
I have just released an iOS mobile app for Ollama and wanted to share it with the community. A lot of improvements are coming soon.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1529/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1529/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4309
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4309/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4309/comments
https://api.github.com/repos/ollama/ollama/issues/4309/events
https://github.com/ollama/ollama/issues/4309
2,288,939,443
I_kwDOJ0Z1Ps6IbnGz
4,309
I have uploaded this model, but it is not shown on my page.
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/taozhiyuai/followers", "following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}", "gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}", "starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions", "organizations_url": "https://api.github.com/users/taozhiyuai/orgs", "repos_url": "https://api.github.com/users/taozhiyuai/repos", "events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}", "received_events_url": "https://api.github.com/users/taozhiyuai/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6573197867, "node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw", "url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com", "name": "ollama.com", "color": "ffffff", "default": false, "description": "" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }, { "login": "hoyyeva", "id": 63033505, "node_id": "MDQ6VXNlcjYzMDMzNTA1", "avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hoyyeva", "html_url": "https://github.com/hoyyeva", "followers_url": "https://api.github.com/users/hoyyeva/followers", "following_url": "https://api.github.com/users/hoyyeva/following{/other_user}", "gists_url": "https://api.github.com/users/hoyyeva/gists{/gist_id}", "starred_url": "https://api.github.com/users/hoyyeva/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hoyyeva/subscriptions", "organizations_url": "https://api.github.com/users/hoyyeva/orgs", "repos_url": "https://api.github.com/users/hoyyeva/repos", "events_url": "https://api.github.com/users/hoyyeva/events{/privacy}", "received_events_url": "https://api.github.com/users/hoyyeva/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
1
2024-05-10T05:06:12
2024-05-10T08:52:15
2024-05-10T08:52:15
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? <img width="1091" alt="截屏2024-05-10 13 00 11" src="https://github.com/ollama/ollama/assets/146583103/f809d253-4deb-4224-99f8-3a20501ad869"> I have uploaded this model, but it is not shown on my page. ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 1.34
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/taozhiyuai/followers", "following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}", "gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}", "starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions", "organizations_url": "https://api.github.com/users/taozhiyuai/orgs", "repos_url": "https://api.github.com/users/taozhiyuai/repos", "events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}", "received_events_url": "https://api.github.com/users/taozhiyuai/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4309/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/705
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/705/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/705/comments
https://api.github.com/repos/ollama/ollama/issues/705/events
https://github.com/ollama/ollama/pull/705
1,927,045,435
PR_kwDOJ0Z1Ps5b8evF
705
Fix go test ./... issue: fmt.Println arg list ends with redundant newline
{ "login": "xyproto", "id": 52813, "node_id": "MDQ6VXNlcjUyODEz", "avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xyproto", "html_url": "https://github.com/xyproto", "followers_url": "https://api.github.com/users/xyproto/followers", "following_url": "https://api.github.com/users/xyproto/following{/other_user}", "gists_url": "https://api.github.com/users/xyproto/gists{/gist_id}", "starred_url": "https://api.github.com/users/xyproto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xyproto/subscriptions", "organizations_url": "https://api.github.com/users/xyproto/orgs", "repos_url": "https://api.github.com/users/xyproto/repos", "events_url": "https://api.github.com/users/xyproto/events{/privacy}", "received_events_url": "https://api.github.com/users/xyproto/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-10-04T22:02:22
2023-10-05T20:09:41
2023-10-05T15:11:05
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/705", "html_url": "https://github.com/ollama/ollama/pull/705", "diff_url": "https://github.com/ollama/ollama/pull/705.diff", "patch_url": "https://github.com/ollama/ollama/pull/705.patch", "merged_at": "2023-10-05T15:11:05" }
`go test ./...` currently fails with: ``` # github.com/jmorganca/ollama/cmd cmd/cmd.go:690:7: fmt.Println arg list ends with redundant newline cmd/cmd.go:698:7: fmt.Println arg list ends with redundant newline cmd/cmd.go:704:7: fmt.Println arg list ends with redundant newline cmd/cmd.go:710:7: fmt.Println arg list ends with redundant newline ? github.com/jmorganca/ollama [no test files] ? github.com/jmorganca/ollama/api [no test files] ? github.com/jmorganca/ollama/llm [no test files] ? github.com/jmorganca/ollama/llm/llama.cpp [no test files] ? github.com/jmorganca/ollama/parser [no test files] ? github.com/jmorganca/ollama/progressbar [no test files] ok github.com/jmorganca/ollama/format 0.003s ? github.com/jmorganca/ollama/vector [no test files] ? github.com/jmorganca/ollama/version [no test files] ok github.com/jmorganca/ollama/server 0.005s FAIL ``` This commit changes fmt.Println to just fmt.Print so that `go test ./...` passes: ``` ? github.com/jmorganca/ollama [no test files] ? github.com/jmorganca/ollama/api [no test files] ? github.com/jmorganca/ollama/cmd [no test files] ? github.com/jmorganca/ollama/llm [no test files] ? github.com/jmorganca/ollama/llm/llama.cpp [no test files] ? github.com/jmorganca/ollama/parser [no test files] ? github.com/jmorganca/ollama/progressbar [no test files] ok github.com/jmorganca/ollama/format (cached) ? github.com/jmorganca/ollama/vector [no test files] ? github.com/jmorganca/ollama/version [no test files] ok github.com/jmorganca/ollama/server (cached) ``` I'm on Arch Linux using Go 1.21.1 linux/amd64.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/705/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5675
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5675/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5675/comments
https://api.github.com/repos/ollama/ollama/issues/5675/events
https://github.com/ollama/ollama/pull/5675
2,407,007,425
PR_kwDOJ0Z1Ps51TA0O
5,675
Add Kerlig AI, an app for macOS
{ "login": "Jaarson", "id": 16690523, "node_id": "MDQ6VXNlcjE2NjkwNTIz", "avatar_url": "https://avatars.githubusercontent.com/u/16690523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jaarson", "html_url": "https://github.com/Jaarson", "followers_url": "https://api.github.com/users/Jaarson/followers", "following_url": "https://api.github.com/users/Jaarson/following{/other_user}", "gists_url": "https://api.github.com/users/Jaarson/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jaarson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jaarson/subscriptions", "organizations_url": "https://api.github.com/users/Jaarson/orgs", "repos_url": "https://api.github.com/users/Jaarson/repos", "events_url": "https://api.github.com/users/Jaarson/events{/privacy}", "received_events_url": "https://api.github.com/users/Jaarson/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-07-13T15:16:17
2024-07-13T15:33:47
2024-07-13T15:33:47
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5675", "html_url": "https://github.com/ollama/ollama/pull/5675", "diff_url": "https://github.com/ollama/ollama/pull/5675.diff", "patch_url": "https://github.com/ollama/ollama/pull/5675.patch", "merged_at": "2024-07-13T15:33:46" }
null
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5675/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3600
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3600/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3600/comments
https://api.github.com/repos/ollama/ollama/issues/3600/events
https://github.com/ollama/ollama/pull/3600
2,238,285,742
PR_kwDOJ0Z1Ps5sZATO
3,600
mixtral mem
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-04-11T18:10:55
2024-04-11T19:23:38
2024-04-11T19:23:37
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3600", "html_url": "https://github.com/ollama/ollama/pull/3600", "diff_url": "https://github.com/ollama/ollama/pull/3600.diff", "patch_url": "https://github.com/ollama/ollama/pull/3600.patch", "merged_at": "2024-04-11T19:23:37" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3600/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7778
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7778/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7778/comments
https://api.github.com/repos/ollama/ollama/issues/7778/events
https://github.com/ollama/ollama/issues/7778
2,679,493,210
I_kwDOJ0Z1Ps6ftdJa
7,778
tool_choice parameter
{ "login": "nicho2", "id": 11471811, "node_id": "MDQ6VXNlcjExNDcxODEx", "avatar_url": "https://avatars.githubusercontent.com/u/11471811?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nicho2", "html_url": "https://github.com/nicho2", "followers_url": "https://api.github.com/users/nicho2/followers", "following_url": "https://api.github.com/users/nicho2/following{/other_user}", "gists_url": "https://api.github.com/users/nicho2/gists{/gist_id}", "starred_url": "https://api.github.com/users/nicho2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nicho2/subscriptions", "organizations_url": "https://api.github.com/users/nicho2/orgs", "repos_url": "https://api.github.com/users/nicho2/repos", "events_url": "https://api.github.com/users/nicho2/events{/privacy}", "received_events_url": "https://api.github.com/users/nicho2/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
4
2024-11-21T13:31:30
2024-12-11T08:58:27
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hello, I use the model **Mixtral 8*22B Q4_0.** I want to use **function calling** but the model don't send very good the tool to call (2 times / 10 (in tool_call tag)) so i add the parameter : "tool_choice": "required", but it's seems have no effect . Is this capacity is take into account in ollama ? what doing else? example frame sending : {"messages": [{"content": "\nTu es un assistant d\u00e9di\u00e9 \u00e0 supporter un utilisateur dans sa compr\u00e9hension des donn\u00e9es issues des espaces d'un b\u00e2timent. \nVotre r\u00f4le principal est de r\u00e9pondre pr\u00e9cis\u00e9ment aux demandes de l'utilisateur en coordonnant des agents sp\u00e9cialis\u00e9s, chacun ayant des comp\u00e9tences sp\u00e9cifiques pour r\u00e9cup\u00e9rer et analyser les donn\u00e9es.\n\n\u00c9tapes pour traiter la demande de l'utilisateur :\n 1. Comprendre la demande de l'utilisateur, identifier les informations manquantes et les compl\u00e9ter par toi-m\u00eame. Une demande peut te paraitre incompl\u00e8te mais seul l'agent concern\u00e9 est en capacit\u00e9 de l'exprimer.\n<example>\n- l'utilisateur demande: \"quelles sont les statistiques au workcafe ce matin?\". l'agent sp\u00e9cialis\u00e9 charg\u00e9 de r\u00e9cup\u00e9rer les donn\u00e9es sait que statistiques correspond au besoin d'avoir les statistiques de toutes les m\u00e9triques de l'espace concern\u00e9. \n</exemple>\n 2. V\u00e9rifier les droits d'acc\u00e8s de l'utilisateur \u00e0 l'espace concern\u00e9.\n 3. Planifier la meilleure d\u00e9marche en d\u00e9terminant quels agents sp\u00e9cialis\u00e9s solliciter.\n 4. D\u00e9l\u00e9guer les t\u00e2ches aux agents appropri\u00e9s via les appels de fonction correspondants.\n 5. Rassembler les r\u00e9ponses des agents et fournir une r\u00e9ponse coh\u00e9rente \u00e0 l'utilisateur.\n\nTu d\u00e9l\u00e8gues la t\u00e2che \u00e0 l'assistant sp\u00e9cialis\u00e9 appropri\u00e9 en invoquant l'outil correspondant parmi ceux disponibles.\nSeuls les assistants sp\u00e9cialis\u00e9s sont autoris\u00e9s \u00e0 le faire pour l'utilisateur. L'utilisateur n'est pas au courant des diff\u00e9rents assistants sp\u00e9cialis\u00e9s, ne les mentionnez donc pas;\nD\u00e9l\u00e9guez simplement discr\u00e8tement via des appels de fonction. Assurez-vous de respecter la confidentialit\u00e9 des informations de l'utilisateur. Ne divulguez pas d'informations sensibles et conformez-vous aux politiques de protection des donn\u00e9es.\nApr\u00e8s avoir fourni une r\u00e9ponse, demandez \u00e0 l'utilisateur s'il a besoin d'informations suppl\u00e9mentaires ou d'assistance suppl\u00e9mentaire. \n\n\nCurrent user:\n<User>\n{'user_id': '3'}\n</User> \nCurrent time: 2024-11-21 14:08:44.800498.\n", "role": "system"}, {"content": "quelles sont les statistiques au workcafe ce matin", "role": "user"}], "model": "mixtral:8x22b", "n": 1, "stream": false, "temperature": 0.3, "tool_choice": "required", "tools": [{"type": "function", "function": {"name": "tavily_search_results_json", "description": "A search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events. 
Input should be a search query.", "parameters": {"properties": {"query": {"description": "search query to look up", "type": "string"}}, "required": ["query"], "type": "object"}}}, {"type": "function", "function": {"name": "ToAccesRightAgent", "description": "Transfert du travail \u00e0 un agent sp\u00e9cialis\u00e9 dans la gestion des droits d'acc\u00e8s.", "parameters": {"properties": {"request": {"description": "Toutes les questions concernant les droits d'acc\u00e8s de l'utilisateur aux diff\u00e9rents espaces du b\u00e2timent.", "type": "string"}, "username": {"anyOf": [{"type": "string"}, {"type": "null"}], "description": "le login utilisateur"}, "user_id": {"description": "le user_id de l'utilisateur", "type": "string"}}, "required": ["request", "username", "user_id"], "type": "object"}}}, {"type": "function", "function": {"name": "ToDatasFetchingAgent", "description": "Transfert du travail \u00e0 un agent sp\u00e9cialis\u00e9 dans la r\u00e9cup\u00e9ration des donn\u00e9es.", "parameters": {"properties": {"request": {"description": "Toutes les questions n\u00e9cessitant la r\u00e9cup\u00e9ration des donn\u00e9es d'espaces du b\u00e2timent.", "type": "string"}}, "required": ["request"], "type": "object"}}}, {"type": "function", "function": {"name": "ToKpiPoliciesAgent", "description": "Transfert du travail \u00e0 un agent sp\u00e9cialis\u00e9 dans la r\u00e9cup\u00e9ration de r\u00e8gles permettant l'interpr\u00e9tation des donn\u00e9es.", "parameters": {"properties": {"request": {"description": "Toutes les questions li\u00e9es \u00e0 l'interpr\u00e9tation des donn\u00e9es d'espaces du b\u00e2timent.", "type": "string"}}, "required": ["request"], "type": "object"}}}]} log ollama: time=2024-11-21T13:09:00.410Z level=DEBUG source=routes.go:1457 msg="chat request" images=0 prompt="[AVAILABLE_TOOLS] [{\"type\":\"function\",\"function\":{\"name\":\"tavily_search_results_json\",\"description\":\"A search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events. 
Input should be a search query.\",\"parameters\":{\"type\":\"object\",\"required\":[\"query\"],\"properties\":{\"query\":{\"type\":\"string\",\"description\":\"search query to look up\"}}}}},{\"type\":\"function\",\"function\":{\"name\":\"ToAccesRightAgent\",\"description\":\"Transfert du travail à un agent spécialisé dans la gestion des droits d'accès.\",\"parameters\":{\"type\":\"object\",\"required\":[\"request\",\"username\",\"user_id\"],\"properties\":{\"request\":{\"type\":\"string\",\"description\":\"Toutes les questions concernant les droits d'accès de l'utilisateur aux différents espaces du bâtiment.\"},\"user_id\":{\"type\":\"string\",\"description\":\"le user_id de l'utilisateur\"},\"username\":{\"type\":\"\",\"description\":\"le login utilisateur\"}}}}},{\"type\":\"function\",\"function\":{\"name\":\"ToDatasFetchingAgent\",\"description\":\"Transfert du travail à un agent spécialisé dans la récupération des données.\",\"parameters\":{\"type\":\"object\",\"required\":[\"request\"],\"properties\":{\"request\":{\"type\":\"string\",\"description\":\"Toutes les questions nécessitant la récupération des données d'espaces du bâtiment.\"}}}}},{\"type\":\"function\",\"function\":{\"name\":\"ToKpiPoliciesAgent\",\"description\":\"Transfert du travail à un agent spécialisé dans la récupération de règles permettant l'interprétation des données.\",\"parameters\":{\"type\":\"object\",\"required\":[\"request\"],\"properties\":{\"request\":{\"type\":\"string\",\"description\":\"Toutes les questions liées à l'interprétation des données d'espaces du bâtiment.\"}}}}}][/AVAILABLE_TOOLS][INST] Tu es un assistant dédié à supporter un utilisateur dans sa compréhension des données issues des espaces d'un bâtiment. \nVotre rôle principal est de répondre précisément aux demandes de l'utilisateur en coordonnant des agents spécialisés, chacun ayant des compétences spécifiques pour récupérer et analyser les données.\n\nÉtapes pour traiter la demande de l'utilisateur :\n 1. Comprendre la demande de l'utilisateur, identifier les informations manquantes et les compléter par toi-même. Une demande peut te paraitre incomplète mais seul l'agent concerné est en capacité de l'exprimer.\n<example>\n- l'utilisateur demande: \"quelles sont les statistiques au workcafe ce matin?\". l'agent spécialisé chargé de récupérer les données sait que statistiques correspond au besoin d'avoir les statistiques de toutes les métriques de l'espace concerné. \n</exemple>\n 2. Vérifier les droits d'accès de l'utilisateur à l'espace concerné.\n 3. Planifier la meilleure démarche en déterminant quels agents spécialisés solliciter.\n 4. Déléguer les tâches aux agents appropriés via les appels de fonction correspondants.\n 5. Rassembler les réponses des agents et fournir une réponse cohérente à l'utilisateur.\n\nTu délègues la tâche à l'assistant spécialisé approprié en invoquant l'outil correspondant parmi ceux disponibles.\nSeuls les assistants spécialisés sont autorisés à le faire pour l'utilisateur. L'utilisateur n'est pas au courant des différents assistants spécialisés, ne les mentionnez donc pas;\nDéléguez simplement discrètement via des appels de fonction. Assurez-vous de respecter la confidentialité des informations de l'utilisateur. Ne divulguez pas d'informations sensibles et conformez-vous aux politiques de protection des données.\nAprès avoir fourni une réponse, demandez à l'utilisateur s'il a besoin d'informations supplémentaires ou d'assistance supplémentaire. 
\n\n\nCurrent user:\n<User>\n{'user_id': '3'}\n</User> \nCurrent time: 2024-11-21 14:08:44.800498.\n\nquelles sont les statistiques au workcafe ce matin[/INST]" answer: {"id":"chatcmpl-676","object":"chat.completion","created":1732194556,"model":"mixtral:8x22b","system_fingerprint":"fp_ollama","choices":[{"index":0,"message":{"role":"assistant","content":" Compr..hension de la demande : L'utilisateur souhaite conna..tre les statistiques du Workcaf.. pour le matin en cours. Il s'agit probablement des m..triques telles que l'occupation, la temp..rature, etc.\n\nV..rification des droits d'acc..s : Pour v..rifier si l'utilisateur a les droits n..cessaires pour acc..der aux donn..es du Workcaf.., j'appelle ToAccesRightAgent avec les informations suivantes :\n```json\n{\n \"request\": \"V..rification des droits d'acc..s de l'utilisateur au Workcaf..\",\n \"username\": \"\u003cUser_Name\u003e\",\n \"user_id\": \"3\"\n}\n```\nPlanifier la meilleure d..marche : Pour r..cup..rer les statistiques du Workcaf.., je dois solliciter l'agent sp..cialis.. dans la r..cup..ration des donn..es. Je vais donc appeler ToDatasFetchingAgent avec la demande suivante :\n```json\n{\n \"request\": \"R..cup..rer les statistiques du Workcaf.. pour le matin en cours\"\n}\n```\nApr..s avoir re..u les donn..es, je devrai ..galement solliciter l'agent sp..cialis.. dans la r..cup..ration de r..gles permettant d'interpr..ter ces donn..es. Je vais donc appeler ToKpiPoliciesAgent avec la demande suivante :\n```json\n{\n \"request\": \"R..cup..rer les r..gles pour interpr..ter les statistiques du Workcaf..\"\n}\n```\nUne fois que j'aurai re..u toutes ces informations, je fournirai une r..ponse coh..rente .. l'utilisateur."},"finish_reason":"stop"}],"usage":{"prompt_tokens":1087,"completion_tokens":421,"total_tokens":1508}} ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.4.2
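For context on the request shape involved, here is a minimal Python sketch of passing tools to Ollama's native `/api/chat` endpoint. The tool definition is a trimmed-down, simplified stand-in for one of the agents in the request above (its description is paraphrased in English), and `tool_choice` is omitted since, as reported, it does not appear to be honoured:

```python
import requests

# Simplified tool definition for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "ToDatasFetchingAgent",
        "description": "Delegate work to the agent that fetches building space data (simplified).",
        "parameters": {
            "type": "object",
            "required": ["request"],
            "properties": {"request": {"type": "string"}},
        },
    },
}]

r = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "mixtral:8x22b",
        "messages": [{"role": "user", "content": "quelles sont les statistiques au workcafe ce matin"}],
        "tools": tools,
        "stream": False,
    },
)
message = r.json()["message"]
# If the model decided to call a tool it appears in "tool_calls"; otherwise only text comes back.
print(message.get("tool_calls", message["content"]))
```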
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7778/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7778/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4495
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4495/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4495/comments
https://api.github.com/repos/ollama/ollama/issues/4495/events
https://github.com/ollama/ollama/issues/4495
2,302,215,298
I_kwDOJ0Z1Ps6JOQSC
4,495
gemma 2.0
{ "login": "olumolu", "id": 162728301, "node_id": "U_kgDOCbMJbQ", "avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/olumolu", "html_url": "https://github.com/olumolu", "followers_url": "https://api.github.com/users/olumolu/followers", "following_url": "https://api.github.com/users/olumolu/following{/other_user}", "gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}", "starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/olumolu/subscriptions", "organizations_url": "https://api.github.com/users/olumolu/orgs", "repos_url": "https://api.github.com/users/olumolu/repos", "events_url": "https://api.github.com/users/olumolu/events{/privacy}", "received_events_url": "https://api.github.com/users/olumolu/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
1
2024-05-17T09:16:33
2024-07-10T18:03:19
2024-07-10T18:03:19
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4495/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4495/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7400
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7400/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7400/comments
https://api.github.com/repos/ollama/ollama/issues/7400/events
https://github.com/ollama/ollama/issues/7400
2,618,727,343
I_kwDOJ0Z1Ps6cFpuv
7,400
Creating embeddings using the REST API is much slower than performing the same operation using Sentence Transformers
{ "login": "sebovzeoueb", "id": 7989595, "node_id": "MDQ6VXNlcjc5ODk1OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7989595?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sebovzeoueb", "html_url": "https://github.com/sebovzeoueb", "followers_url": "https://api.github.com/users/sebovzeoueb/followers", "following_url": "https://api.github.com/users/sebovzeoueb/following{/other_user}", "gists_url": "https://api.github.com/users/sebovzeoueb/gists{/gist_id}", "starred_url": "https://api.github.com/users/sebovzeoueb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sebovzeoueb/subscriptions", "organizations_url": "https://api.github.com/users/sebovzeoueb/orgs", "repos_url": "https://api.github.com/users/sebovzeoueb/repos", "events_url": "https://api.github.com/users/sebovzeoueb/events{/privacy}", "received_events_url": "https://api.github.com/users/sebovzeoueb/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5808482718, "node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng", "url": "https://api.github.com/repos/ollama/ollama/labels/performance", "name": "performance", "color": "A5B5C6", "default": false, "description": "" } ]
open
false
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users/jessegross/followers", "following_url": "https://api.github.com/users/jessegross/following{/other_user}", "gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}", "starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jessegross/subscriptions", "organizations_url": "https://api.github.com/users/jessegross/orgs", "repos_url": "https://api.github.com/users/jessegross/repos", "events_url": "https://api.github.com/users/jessegross/events{/privacy}", "received_events_url": "https://api.github.com/users/jessegross/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users/jessegross/followers", "following_url": "https://api.github.com/users/jessegross/following{/other_user}", "gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}", "starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jessegross/subscriptions", "organizations_url": "https://api.github.com/users/jessegross/orgs", "repos_url": "https://api.github.com/users/jessegross/repos", "events_url": "https://api.github.com/users/jessegross/events{/privacy}", "received_events_url": "https://api.github.com/users/jessegross/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
11
2024-10-28T15:13:57
2024-10-30T16:44:52
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I'm working on a RAG written in Python, and we're using ollama as the chatbot LLM provider. It's running in a Docker container and the Python app makes REST API calls to it. So far we have been using Sentence Transformers to create embeddings for the documents that get ingested into the RAG and for the user's query; however, it would be great to drop this dependency, as it adds a bit of startup time and a lot of package dependencies that take up disk space. Since the embed API now supports batching, I've run a test comparing the existing Sentence Transformers code with equivalent code (same vectors, same model) using the embed route of the ollama Docker container, and the latter is about 2x slower. Even though making an HTTP request slows things down a little, I can't imagine the overhead should be that much. Is there any way we can make it faster, so we can get rid of the other dependencies and use ollama for all our LLM-related needs? ### OS Docker ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.14
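A rough Python sketch of the kind of comparison described above: one pass through sentence-transformers in-process, one pass through Ollama's batched `/api/embed` route over HTTP. The model names, batch contents, and timing approach are illustrative assumptions, not the reporter's actual benchmark:

```python
import time

import requests
from sentence_transformers import SentenceTransformer

texts = ["some document chunk"] * 256  # illustrative batch

# In-process embedding with Sentence Transformers.
st_model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model
start = time.time()
st_vectors = st_model.encode(texts)
print(f"sentence-transformers: {time.time() - start:.2f}s")

# Same batch through the Ollama container's batched embed endpoint.
start = time.time()
resp = requests.post(
    "http://localhost:11434/api/embed",
    json={"model": "all-minilm", "input": texts},  # assumed model tag
)
ollama_vectors = resp.json()["embeddings"]
print(f"ollama /api/embed: {time.time() - start:.2f}s")
```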
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7400/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7400/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1395
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1395/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1395/comments
https://api.github.com/repos/ollama/ollama/issues/1395/events
https://github.com/ollama/ollama/issues/1395
2,027,411,394
I_kwDOJ0Z1Ps5419fC
1,395
Model filenames (are incompatible with other programs)
{ "login": "marco-trovato", "id": 18162107, "node_id": "MDQ6VXNlcjE4MTYyMTA3", "avatar_url": "https://avatars.githubusercontent.com/u/18162107?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marco-trovato", "html_url": "https://github.com/marco-trovato", "followers_url": "https://api.github.com/users/marco-trovato/followers", "following_url": "https://api.github.com/users/marco-trovato/following{/other_user}", "gists_url": "https://api.github.com/users/marco-trovato/gists{/gist_id}", "starred_url": "https://api.github.com/users/marco-trovato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marco-trovato/subscriptions", "organizations_url": "https://api.github.com/users/marco-trovato/orgs", "repos_url": "https://api.github.com/users/marco-trovato/repos", "events_url": "https://api.github.com/users/marco-trovato/events{/privacy}", "received_events_url": "https://api.github.com/users/marco-trovato/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-12-06T00:42:55
2023-12-06T01:16:26
2023-12-06T01:16:25
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I already have a folder with several LLM models, each of which can be 20-40 GB. ollama is unable to load them; I have to pull them again one by one, and they will then get saved by ollama according to their hash, i.e.: `.../ollama_models/blobs/sha256:843d506c69eed7ece9a1584965be88421d9774a82bffd59e992d5a73eac2dee0` Of course I could run (it works): `python3 ./koboldcpp.py --useclblast 0 0 --gpulayers 20 --model /mypath/ollama_models/blobs/sha256:843d506c69eed7ece9a1584965be88421d9774a82bffd59e992d5a73eac2dee0` But am I supposed to save a second copy of all those 200 GB of files just because the filenames are expected like this by ollama? PROPOSED SOLUTION: **Please save the files under their model name, i.e.: "wizardcoder_34b-python-q4_K_M.bin"** since everyone already has an "LLM-models" folder full of huge files
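As a stopgap until something like the proposed solution exists, one illustrative workaround is to expose a blob under a friendly name with a symlink, so other tools can load it without a second copy. A minimal Python sketch, where both paths are assumptions you would adapt to your own setup:

```python
import os
from pathlib import Path

# Assumed locations; adjust to your own blob store and model folder.
blob = Path("/mypath/ollama_models/blobs/sha256:843d506c69eed7ece9a1584965be88421d9774a82bffd59e992d5a73eac2dee0")
link = Path.home() / "llm-models" / "wizardcoder_34b-python-q4_K_M.bin"

link.parent.mkdir(parents=True, exist_ok=True)
if not link.exists():
    os.symlink(blob, link)  # the multi-GB file is linked, not duplicated
```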
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1395/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1395/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5887
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5887/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5887/comments
https://api.github.com/repos/ollama/ollama/issues/5887/events
https://github.com/ollama/ollama/pull/5887
2,426,042,598
PR_kwDOJ0Z1Ps52Qr7q
5,887
cmd/server: utilizing OS copy to transfer blobs if the server is local
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/joshyan1/followers", "following_url": "https://api.github.com/users/joshyan1/following{/other_user}", "gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}", "starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions", "organizations_url": "https://api.github.com/users/joshyan1/orgs", "repos_url": "https://api.github.com/users/joshyan1/repos", "events_url": "https://api.github.com/users/joshyan1/events{/privacy}", "received_events_url": "https://api.github.com/users/joshyan1/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
1
2024-07-23T20:13:08
2024-11-21T18:22:06
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5887", "html_url": "https://github.com/ollama/ollama/pull/5887", "diff_url": "https://github.com/ollama/ollama/pull/5887.diff", "patch_url": "https://github.com/ollama/ollama/pull/5887.patch", "merged_at": null }
This PR looks to utilize local copies to a local server prior to posting the blob through the server
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5887/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3062
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3062/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3062/comments
https://api.github.com/repos/ollama/ollama/issues/3062/events
https://github.com/ollama/ollama/issues/3062
2,179,912,174
I_kwDOJ0Z1Ps6B7tHu
3,062
Ubuntu: Snap installation
{ "login": "MartinsRepo", "id": 10252728, "node_id": "MDQ6VXNlcjEwMjUyNzI4", "avatar_url": "https://avatars.githubusercontent.com/u/10252728?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MartinsRepo", "html_url": "https://github.com/MartinsRepo", "followers_url": "https://api.github.com/users/MartinsRepo/followers", "following_url": "https://api.github.com/users/MartinsRepo/following{/other_user}", "gists_url": "https://api.github.com/users/MartinsRepo/gists{/gist_id}", "starred_url": "https://api.github.com/users/MartinsRepo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MartinsRepo/subscriptions", "organizations_url": "https://api.github.com/users/MartinsRepo/orgs", "repos_url": "https://api.github.com/users/MartinsRepo/repos", "events_url": "https://api.github.com/users/MartinsRepo/events{/privacy}", "received_events_url": "https://api.github.com/users/MartinsRepo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-03-11T18:26:44
2024-03-12T01:50:19
2024-03-12T01:50:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Installing Ollama with: sudo snap install ollama --beta is working correctly. Ollama list shows it's working. Changing the default folder with: sudo snap set ollama models=/path to my new ollama model storage/ is accepted. Another ollama list then gives: Error: could not connect to ollama app, is it running? After a restart of the system, the same thing happens.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3062/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3062/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4586
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4586/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4586/comments
https://api.github.com/repos/ollama/ollama/issues/4586/events
https://github.com/ollama/ollama/issues/4586
2,312,025,800
I_kwDOJ0Z1Ps6JzrbI
4,586
Installation path issue
{ "login": "SnowWindDancing", "id": 60132911, "node_id": "MDQ6VXNlcjYwMTMyOTEx", "avatar_url": "https://avatars.githubusercontent.com/u/60132911?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SnowWindDancing", "html_url": "https://github.com/SnowWindDancing", "followers_url": "https://api.github.com/users/SnowWindDancing/followers", "following_url": "https://api.github.com/users/SnowWindDancing/following{/other_user}", "gists_url": "https://api.github.com/users/SnowWindDancing/gists{/gist_id}", "starred_url": "https://api.github.com/users/SnowWindDancing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SnowWindDancing/subscriptions", "organizations_url": "https://api.github.com/users/SnowWindDancing/orgs", "repos_url": "https://api.github.com/users/SnowWindDancing/repos", "events_url": "https://api.github.com/users/SnowWindDancing/events{/privacy}", "received_events_url": "https://api.github.com/users/SnowWindDancing/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
1
2024-05-23T06:02:13
2024-05-23T17:48:52
2024-05-23T17:48:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I hope to be able to specify the installation directory for the Windows version installation.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4586/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1890
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1890/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1890/comments
https://api.github.com/repos/ollama/ollama/issues/1890/events
https://github.com/ollama/ollama/issues/1890
2,074,014,980
I_kwDOJ0Z1Ps57nvUE
1,890
A way to update all downloaded models
{ "login": "Zig1375", "id": 2699034, "node_id": "MDQ6VXNlcjI2OTkwMzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2699034?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Zig1375", "html_url": "https://github.com/Zig1375", "followers_url": "https://api.github.com/users/Zig1375/followers", "following_url": "https://api.github.com/users/Zig1375/following{/other_user}", "gists_url": "https://api.github.com/users/Zig1375/gists{/gist_id}", "starred_url": "https://api.github.com/users/Zig1375/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zig1375/subscriptions", "organizations_url": "https://api.github.com/users/Zig1375/orgs", "repos_url": "https://api.github.com/users/Zig1375/repos", "events_url": "https://api.github.com/users/Zig1375/events{/privacy}", "received_events_url": "https://api.github.com/users/Zig1375/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
13
2024-01-10T10:03:21
2024-12-06T21:38:59
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'd like to have a way to update all downloaded models. Right now I have to pull each model separately.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1890/reactions", "total_count": 13, "+1": 12, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/1890/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6675
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6675/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6675/comments
https://api.github.com/repos/ollama/ollama/issues/6675/events
https://github.com/ollama/ollama/pull/6675
2,510,033,036
PR_kwDOJ0Z1Ps56pVKS
6,675
Bugfix for #6656 (Fixed redirect check if direct URL is already Present)
{ "login": "Tobix99", "id": 22603015, "node_id": "MDQ6VXNlcjIyNjAzMDE1", "avatar_url": "https://avatars.githubusercontent.com/u/22603015?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tobix99", "html_url": "https://github.com/Tobix99", "followers_url": "https://api.github.com/users/Tobix99/followers", "following_url": "https://api.github.com/users/Tobix99/following{/other_user}", "gists_url": "https://api.github.com/users/Tobix99/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tobix99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tobix99/subscriptions", "organizations_url": "https://api.github.com/users/Tobix99/orgs", "repos_url": "https://api.github.com/users/Tobix99/repos", "events_url": "https://api.github.com/users/Tobix99/events{/privacy}", "received_events_url": "https://api.github.com/users/Tobix99/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
4
2024-09-06T09:49:49
2024-12-29T19:40:49
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6675", "html_url": "https://github.com/ollama/ollama/pull/6675", "diff_url": "https://github.com/ollama/ollama/pull/6675.diff", "patch_url": "https://github.com/ollama/ollama/pull/6675.patch", "merged_at": null }
Sorry, this is a bugfix for my old PR from [yesterday](https://github.com/ollama/ollama/pull/6656#issue-2507674135). I hadn't tested it thoroughly yesterday and noticed another bug in the logic. With this new logic it should return the requestURL on Status OK, the Redirect URL on Status TemporaryRedirect, and an error on an unexpected status code. I've tested it now with a local registry and it pulls the image. This PR should close #6308.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6675/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6675/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2441
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2441/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2441/comments
https://api.github.com/repos/ollama/ollama/issues/2441/events
https://github.com/ollama/ollama/pull/2441
2,128,273,316
PR_kwDOJ0Z1Ps5mimt0
2,441
Allow Tauri requests by default (tauri://)
{ "login": "da-z", "id": 3681019, "node_id": "MDQ6VXNlcjM2ODEwMTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3681019?v=4", "gravatar_id": "", "url": "https://api.github.com/users/da-z", "html_url": "https://github.com/da-z", "followers_url": "https://api.github.com/users/da-z/followers", "following_url": "https://api.github.com/users/da-z/following{/other_user}", "gists_url": "https://api.github.com/users/da-z/gists{/gist_id}", "starred_url": "https://api.github.com/users/da-z/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/da-z/subscriptions", "organizations_url": "https://api.github.com/users/da-z/orgs", "repos_url": "https://api.github.com/users/da-z/repos", "events_url": "https://api.github.com/users/da-z/events{/privacy}", "received_events_url": "https://api.github.com/users/da-z/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-02-10T10:12:09
2024-11-21T05:53:16
2024-11-21T05:53:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2441", "html_url": "https://github.com/ollama/ollama/pull/2441", "diff_url": "https://github.com/ollama/ollama/pull/2441.diff", "patch_url": "https://github.com/ollama/ollama/pull/2441.patch", "merged_at": null }
In preparation for possibly supporting the `tauri://` scheme by default, I refactored the CORS part of the config a bit and added a test.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2441/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2441/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8535
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8535/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8535/comments
https://api.github.com/repos/ollama/ollama/issues/8535/events
https://github.com/ollama/ollama/issues/8535
2,804,389,409
I_kwDOJ0Z1Ps6nJ5Yh
8,535
Failsafe model download method?
{ "login": "paboum", "id": 54635274, "node_id": "MDQ6VXNlcjU0NjM1Mjc0", "avatar_url": "https://avatars.githubusercontent.com/u/54635274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/paboum", "html_url": "https://github.com/paboum", "followers_url": "https://api.github.com/users/paboum/followers", "following_url": "https://api.github.com/users/paboum/following{/other_user}", "gists_url": "https://api.github.com/users/paboum/gists{/gist_id}", "starred_url": "https://api.github.com/users/paboum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/paboum/subscriptions", "organizations_url": "https://api.github.com/users/paboum/orgs", "repos_url": "https://api.github.com/users/paboum/repos", "events_url": "https://api.github.com/users/paboum/events{/privacy}", "received_events_url": "https://api.github.com/users/paboum/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
10
2025-01-22T13:25:05
2025-01-30T09:12:47
2025-01-30T00:09:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm trying to pull the famous deepseek-r1 model today: ``` time=2025-01-22T14:22:30.734+01:00 level=INFO source=download.go:370 msg="4cd576d9aa16 part 23 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection." time=2025-01-22T14:22:30.734+01:00 level=INFO source=download.go:370 msg="4cd576d9aa16 part 1 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection." time=2025-01-22T14:22:30.734+01:00 level=INFO source=download.go:370 msg="4cd576d9aa16 part 22 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection." time=2025-01-22T14:22:30.734+01:00 level=INFO source=download.go:370 msg="4cd576d9aa16 part 24 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection." time=2025-01-22T14:22:30.734+01:00 level=INFO source=download.go:370 msg="4cd576d9aa16 part 2 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection." time=2025-01-22T14:22:30.735+01:00 level=INFO source=download.go:370 msg="4cd576d9aa16 part 16 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection." time=2025-01-22T14:22:30.735+01:00 level=INFO source=download.go:370 msg="4cd576d9aa16 part 26 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection." time=2025-01-22T14:22:30.735+01:00 level=INFO source=download.go:370 msg="4cd576d9aa16 part 17 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection." time=2025-01-22T14:22:30.735+01:00 level=INFO source=download.go:370 msg="4cd576d9aa16 part 11 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection." ``` Obviously, due to the high demand for the new model, the download from the central repository is problematic. Is there a secondary method available? For example, you could share the file via torrent; I would download it and put it in the right folder.
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8535/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8535/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6893
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6893/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6893/comments
https://api.github.com/repos/ollama/ollama/issues/6893/events
https://github.com/ollama/ollama/issues/6893
2,538,984,231
I_kwDOJ0Z1Ps6XVdMn
6,893
Llama3.1 70b through web API gives different quality than command line
{ "login": "remco-pc", "id": 8077908, "node_id": "MDQ6VXNlcjgwNzc5MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/8077908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/remco-pc", "html_url": "https://github.com/remco-pc", "followers_url": "https://api.github.com/users/remco-pc/followers", "following_url": "https://api.github.com/users/remco-pc/following{/other_user}", "gists_url": "https://api.github.com/users/remco-pc/gists{/gist_id}", "starred_url": "https://api.github.com/users/remco-pc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/remco-pc/subscriptions", "organizations_url": "https://api.github.com/users/remco-pc/orgs", "repos_url": "https://api.github.com/users/remco-pc/repos", "events_url": "https://api.github.com/users/remco-pc/events{/privacy}", "received_events_url": "https://api.github.com/users/remco-pc/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-09-20T14:35:37
2024-09-20T16:41:14
2024-09-20T16:41:13
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? through web i ask this: can you give an svg circle example ? Below is simple SVG (alable Graphics) example draws a red: ``` <svg width="100 height="100 ="50"="50 r="40 stroke="green stroke-width4" fillred" /> svg> ``Here's breakdown of the used in this circle example- `` and ``: These the coordinates (, y) the center the circle. this case, are set to50, is half of width and of the SVG. `r`: attribute specifies the of the circle Here, it set to40. -stroke` andstroke-width`: `stroke` defines the of the outline the circle ( in this example while `stroke` sets thickness of the. - ``: This attribute the fill color the circle itselfred in this). You adjust these to customize appearance and of SVG circle While in the command line, it responses with good quality, are there different parameters, i am only using the defaults... ### OS Linux ### GPU _No response_ ### CPU Intel ### Ollama version ollama version is 0.3.6
{ "login": "remco-pc", "id": 8077908, "node_id": "MDQ6VXNlcjgwNzc5MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/8077908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/remco-pc", "html_url": "https://github.com/remco-pc", "followers_url": "https://api.github.com/users/remco-pc/followers", "following_url": "https://api.github.com/users/remco-pc/following{/other_user}", "gists_url": "https://api.github.com/users/remco-pc/gists{/gist_id}", "starred_url": "https://api.github.com/users/remco-pc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/remco-pc/subscriptions", "organizations_url": "https://api.github.com/users/remco-pc/orgs", "repos_url": "https://api.github.com/users/remco-pc/repos", "events_url": "https://api.github.com/users/remco-pc/events{/privacy}", "received_events_url": "https://api.github.com/users/remco-pc/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6893/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2184
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2184/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2184/comments
https://api.github.com/repos/ollama/ollama/issues/2184/events
https://github.com/ollama/ollama/issues/2184
2,099,916,090
I_kwDOJ0Z1Ps59Ki06
2,184
docker swarm service create doesn't use GPU
{ "login": "go-laoji", "id": 92168729, "node_id": "U_kgDOBX5iGQ", "avatar_url": "https://avatars.githubusercontent.com/u/92168729?v=4", "gravatar_id": "", "url": "https://api.github.com/users/go-laoji", "html_url": "https://github.com/go-laoji", "followers_url": "https://api.github.com/users/go-laoji/followers", "following_url": "https://api.github.com/users/go-laoji/following{/other_user}", "gists_url": "https://api.github.com/users/go-laoji/gists{/gist_id}", "starred_url": "https://api.github.com/users/go-laoji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/go-laoji/subscriptions", "organizations_url": "https://api.github.com/users/go-laoji/orgs", "repos_url": "https://api.github.com/users/go-laoji/repos", "events_url": "https://api.github.com/users/go-laoji/events{/privacy}", "received_events_url": "https://api.github.com/users/go-laoji/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
4
2024-01-25T09:12:33
2025-01-27T15:41:37
2024-03-27T20:46:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
``` docker service create \ --name ollama \ --mount type=bind,source=/tmp/ollama,destination=/root/.ollama \ --constraint node.role==worker \ --generic-resource "GPU=2" \ --mount type=bind,source=/dev/nvidia0,target=/dev/nvidia0 \ --mount type=bind,source=/dev/nvidiactl,target=/dev/nvidiactl \ --replicas 1 -p 11434:11434 ollama/ollama ``` When using swarm service create as above, the running service doesn't use the GPU.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2184/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4533
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4533/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4533/comments
https://api.github.com/repos/ollama/ollama/issues/4533/events
https://github.com/ollama/ollama/pull/4533
2,305,071,842
PR_kwDOJ0Z1Ps5v6Qyq
4,533
Move the parser back + handle utf16 files
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-05-20T04:45:53
2024-05-20T18:26:46
2024-05-20T18:26:46
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4533", "html_url": "https://github.com/ollama/ollama/pull/4533", "diff_url": "https://github.com/ollama/ollama/pull/4533.diff", "patch_url": "https://github.com/ollama/ollama/pull/4533.patch", "merged_at": "2024-05-20T18:26:46" }
This moves the parser back to `parser/` and also adds support for decoding utf16le and utf16be files. Fixes #4503
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4533/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4533/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5210
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5210/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5210/comments
https://api.github.com/repos/ollama/ollama/issues/5210/events
https://github.com/ollama/ollama/pull/5210
2,367,552,559
PR_kwDOJ0Z1Ps5zPOgW
5,210
cabelo@opensuse.org - Add LTO
{ "login": "cabelo", "id": 675645, "node_id": "MDQ6VXNlcjY3NTY0NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/675645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cabelo", "html_url": "https://github.com/cabelo", "followers_url": "https://api.github.com/users/cabelo/followers", "following_url": "https://api.github.com/users/cabelo/following{/other_user}", "gists_url": "https://api.github.com/users/cabelo/gists{/gist_id}", "starred_url": "https://api.github.com/users/cabelo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cabelo/subscriptions", "organizations_url": "https://api.github.com/users/cabelo/orgs", "repos_url": "https://api.github.com/users/cabelo/repos", "events_url": "https://api.github.com/users/cabelo/events{/privacy}", "received_events_url": "https://api.github.com/users/cabelo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-06-22T04:40:59
2024-08-23T01:58:47
2024-08-23T01:58:47
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5210", "html_url": "https://github.com/ollama/ollama/pull/5210", "diff_url": "https://github.com/ollama/ollama/pull/5210.diff", "patch_url": "https://github.com/ollama/ollama/pull/5210.patch", "merged_at": null }
null
{ "login": "cabelo", "id": 675645, "node_id": "MDQ6VXNlcjY3NTY0NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/675645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cabelo", "html_url": "https://github.com/cabelo", "followers_url": "https://api.github.com/users/cabelo/followers", "following_url": "https://api.github.com/users/cabelo/following{/other_user}", "gists_url": "https://api.github.com/users/cabelo/gists{/gist_id}", "starred_url": "https://api.github.com/users/cabelo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cabelo/subscriptions", "organizations_url": "https://api.github.com/users/cabelo/orgs", "repos_url": "https://api.github.com/users/cabelo/repos", "events_url": "https://api.github.com/users/cabelo/events{/privacy}", "received_events_url": "https://api.github.com/users/cabelo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5210/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7244
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7244/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7244/comments
https://api.github.com/repos/ollama/ollama/issues/7244/events
https://github.com/ollama/ollama/issues/7244
2,595,506,668
I_kwDOJ0Z1Ps6atEns
7,244
Pulling models from private OCI Registries
{ "login": "mitja", "id": 234870, "node_id": "MDQ6VXNlcjIzNDg3MA==", "avatar_url": "https://avatars.githubusercontent.com/u/234870?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mitja", "html_url": "https://github.com/mitja", "followers_url": "https://api.github.com/users/mitja/followers", "following_url": "https://api.github.com/users/mitja/following{/other_user}", "gists_url": "https://api.github.com/users/mitja/gists{/gist_id}", "starred_url": "https://api.github.com/users/mitja/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mitja/subscriptions", "organizations_url": "https://api.github.com/users/mitja/orgs", "repos_url": "https://api.github.com/users/mitja/repos", "events_url": "https://api.github.com/users/mitja/events{/privacy}", "received_events_url": "https://api.github.com/users/mitja/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
5
2024-10-17T18:57:43
2025-01-19T18:40:04
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
According to #2388 it should be possible to push and pull models to a Docker/OCI registry (without authentication). Even though it's an unsupported feature, I find it very useful and would like to contribute a short description how to do this. Potential use cases are - organisation-internal registries for orgs that limit internet access, - serving private models, - running Ollama on air gapped systems, and - saving bandwidth and download time at edge locations. I've tried it with a local docker registry: Pushing seems to work, pulling of the manifest works, as well, but pulling the blobs apparently did not work. Here is what I've tried: Run a local docker registry v2: ```bash docker run -d -p 5000:5000 --restart=always --name registry registry:2 ``` Copy a model and push it to the registry: ```bash ollama cp phi localhost:5000/mitja/phi ollama push localhost:5000/mitja/phi --insecure ``` Remove the copied model and pull it, again (that works, I believe because the blobs from the original phi model are still there): ```bash ollama rm localhost:5000/mitja/phi ollama pull localhost:5000/mitja/phi --insecure ``` Remove both the copied and the original model, then pull the model from the private registry, again (does not work): ```bash ollama rm phi ollama rm localhost:5000/mitja/phi ollama pull localhost:5000/mitja/phi --insecure ``` Runs into `Error: http: no Location header in response` Pull the original model, then the copied model (works): ```bash ollama pull phi ollama pull localhost:5000/mitja/phi --insecure ollama run localhost:5000/mitja/phi ``` Remove the registry container to clean up: ```bash docker stop /registry docker rm /registry ``` Did I miss a step or did I make a mistake, or is pushing/pulling the blobs not yet possible?
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7244/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4460
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4460/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4460/comments
https://api.github.com/repos/ollama/ollama/issues/4460/events
https://github.com/ollama/ollama/pull/4460
2,298,939,196
PR_kwDOJ0Z1Ps5vlpNE
4,460
fix the cpu estimatedTotal memory + get the expiry time for loading models
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-05-15T22:17:28
2024-05-15T22:29:39
2024-05-15T22:29:39
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4460", "html_url": "https://github.com/ollama/ollama/pull/4460", "diff_url": "https://github.com/ollama/ollama/pull/4460.diff", "patch_url": "https://github.com/ollama/ollama/pull/4460.patch", "merged_at": null }
null
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4460/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1643
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1643/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1643/comments
https://api.github.com/repos/ollama/ollama/issues/1643/events
https://github.com/ollama/ollama/issues/1643
2,051,245,632
I_kwDOJ0Z1Ps56Q4ZA
1,643
Example to run ollama on OpenShift
{ "login": "jeremyssc", "id": 143193860, "node_id": "U_kgDOCIj3BA", "avatar_url": "https://avatars.githubusercontent.com/u/143193860?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeremyssc", "html_url": "https://github.com/jeremyssc", "followers_url": "https://api.github.com/users/jeremyssc/followers", "following_url": "https://api.github.com/users/jeremyssc/following{/other_user}", "gists_url": "https://api.github.com/users/jeremyssc/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeremyssc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeremyssc/subscriptions", "organizations_url": "https://api.github.com/users/jeremyssc/orgs", "repos_url": "https://api.github.com/users/jeremyssc/repos", "events_url": "https://api.github.com/users/jeremyssc/events{/privacy}", "received_events_url": "https://api.github.com/users/jeremyssc/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396191, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw", "url": "https://api.github.com/repos/ollama/ollama/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" }, { "id": 6677677816, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A", "url": "https://api.github.com/repos/ollama/ollama/labels/docker", "name": "docker", "color": "0052CC", "default": false, "description": "Issues relating to using ollama in containers" } ]
closed
false
null
[]
null
2
2023-12-20T20:47:50
2024-05-10T00:25:13
2024-05-10T00:25:13
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, I ran into a permission problem when running the Kubernetes example on OpenShift since the example didn't create a persistent volume claim and a volume. You will find attached to this issue a txt file with the manifests I used to make it work if it could help you. [openshift-ollama-example.txt](https://github.com/jmorganca/ollama/files/13732387/openshift-ollama-example.txt) Here is also the content of the file if the file isn't uploading: ``` --- apiVersion: v1 kind: Namespace metadata: name: ollama --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ollama-storage namespace: ollama spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi storageClassName: ocs-external-storagecluster-cephfs --- apiVersion: apps/v1 kind: Deployment metadata: name: ollama namespace: ollama spec: selector: matchLabels: name: ollama template: metadata: labels: name: ollama app: ollama-serve spec: containers: - name: ollama image: ollama/ollama:latest ports: - name: http containerPort: 11434 protocol: TCP terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /.ollama name: ollama-storage restartPolicy: Always volumes: - name: ollama-storage persistentVolumeClaim: claimName: ollama-storage --- apiVersion: v1 kind: Service metadata: name: ollama namespace: ollama spec: type: ClusterIP selector: name: ollama ports: - port: 80 name: http targetPort: http protocol: TCP --- kind: Route apiVersion: route.openshift.io/v1 metadata: name: ollama namespace: ollama labels: {} spec: to: kind: Service name: ollama tls: null port: targetPort: http ``` Thanks for the great project and take care :)
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1643/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1643/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2011
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2011/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2011/comments
https://api.github.com/repos/ollama/ollama/issues/2011/events
https://github.com/ollama/ollama/issues/2011
2,083,121,620
I_kwDOJ0Z1Ps58KenU
2,011
Parameters loaded from Modelfile are cast to int in /show parameters
{ "login": "nathanpbell", "id": 3697, "node_id": "MDQ6VXNlcjM2OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/3697?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nathanpbell", "html_url": "https://github.com/nathanpbell", "followers_url": "https://api.github.com/users/nathanpbell/followers", "following_url": "https://api.github.com/users/nathanpbell/following{/other_user}", "gists_url": "https://api.github.com/users/nathanpbell/gists{/gist_id}", "starred_url": "https://api.github.com/users/nathanpbell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nathanpbell/subscriptions", "organizations_url": "https://api.github.com/users/nathanpbell/orgs", "repos_url": "https://api.github.com/users/nathanpbell/repos", "events_url": "https://api.github.com/users/nathanpbell/events{/privacy}", "received_events_url": "https://api.github.com/users/nathanpbell/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-01-16T06:44:17
2024-01-16T18:35:25
2024-01-16T18:35:25
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It appears if I set float value parameters in the Modelfile, when I run that model and run `/show parameters` those floats get cast to ints. ### Steps to reproduce Create a Modelfile: ``` FROM mistral:text PARAMETER num_ctx 32000 PARAMETER seed 42 PARAMETER num_predict 128 PARAMETER temperature 0.7 PARAMETER top_p 0.9 ``` Create the model: ``` ollama create mymodel -f Modelfile ``` Run the model: ``` ollama run mymodel ``` Ask for the parameters: ``` >>> /show parameters Model defined parameters: seed 42 temperature 1 top_p 1 num_ctx 32000 num_predict 128 ``` You'll see that "top_p" and "temperature" have been rounded to integer value `1`.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2011/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2011/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6022
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6022/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6022/comments
https://api.github.com/repos/ollama/ollama/issues/6022/events
https://github.com/ollama/ollama/issues/6022
2,433,675,409
I_kwDOJ0Z1Ps6RDvCR
6,022
ollama version is 0.0.0 (windows preview)
{ "login": "dispather", "id": 62810211, "node_id": "MDQ6VXNlcjYyODEwMjEx", "avatar_url": "https://avatars.githubusercontent.com/u/62810211?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dispather", "html_url": "https://github.com/dispather", "followers_url": "https://api.github.com/users/dispather/followers", "following_url": "https://api.github.com/users/dispather/following{/other_user}", "gists_url": "https://api.github.com/users/dispather/gists{/gist_id}", "starred_url": "https://api.github.com/users/dispather/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dispather/subscriptions", "organizations_url": "https://api.github.com/users/dispather/orgs", "repos_url": "https://api.github.com/users/dispather/repos", "events_url": "https://api.github.com/users/dispather/events{/privacy}", "received_events_url": "https://api.github.com/users/dispather/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-07-28T00:46:25
2024-07-28T13:07:01
2024-07-28T13:07:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I installed the Windows preview of Ollama and found the GPU is not working when using Ollama. So I checked the Ollama version. C:\Users\mightyhun\AppData\Local\Programs\Ollama>ollama -v ollama version is 0.0.0 Warning: client version is 0.3.0 It shows a mismatch of the Ollama version, so I also checked the Ollama location. C:\Users\mightyhun\AppData\Local\Programs\Ollama>where ollama C:\Users\mightyhun\AppData\Local\Programs\Ollama\ollama.exe As you can see, Ollama is running from the correct location, but it shows the wrong version and does not show the tray icon. I deleted the WSL2 Ubuntu so it would not interfere with the Windows running environment. What can I do to solve this? ### OS Windows ### GPU Nvidia ### CPU AMD ### Ollama version 0.3.0
{ "login": "dispather", "id": 62810211, "node_id": "MDQ6VXNlcjYyODEwMjEx", "avatar_url": "https://avatars.githubusercontent.com/u/62810211?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dispather", "html_url": "https://github.com/dispather", "followers_url": "https://api.github.com/users/dispather/followers", "following_url": "https://api.github.com/users/dispather/following{/other_user}", "gists_url": "https://api.github.com/users/dispather/gists{/gist_id}", "starred_url": "https://api.github.com/users/dispather/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dispather/subscriptions", "organizations_url": "https://api.github.com/users/dispather/orgs", "repos_url": "https://api.github.com/users/dispather/repos", "events_url": "https://api.github.com/users/dispather/events{/privacy}", "received_events_url": "https://api.github.com/users/dispather/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6022/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6022/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4900
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4900/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4900/comments
https://api.github.com/repos/ollama/ollama/issues/4900/events
https://github.com/ollama/ollama/issues/4900
2,339,921,364
I_kwDOJ0Z1Ps6LeF3U
4,900
MiniCPM-Llama3-V-2_5
{ "login": "kotaxyz", "id": 105466290, "node_id": "U_kgDOBklJsg", "avatar_url": "https://avatars.githubusercontent.com/u/105466290?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kotaxyz", "html_url": "https://github.com/kotaxyz", "followers_url": "https://api.github.com/users/kotaxyz/followers", "following_url": "https://api.github.com/users/kotaxyz/following{/other_user}", "gists_url": "https://api.github.com/users/kotaxyz/gists{/gist_id}", "starred_url": "https://api.github.com/users/kotaxyz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kotaxyz/subscriptions", "organizations_url": "https://api.github.com/users/kotaxyz/orgs", "repos_url": "https://api.github.com/users/kotaxyz/repos", "events_url": "https://api.github.com/users/kotaxyz/events{/privacy}", "received_events_url": "https://api.github.com/users/kotaxyz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
19
2024-06-07T08:43:55
2024-08-13T03:22:20
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
This is the best open source vision model I have ever tried. We need support for it in Ollama.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4900/reactions", "total_count": 17, "+1": 14, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4900/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1703
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1703/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1703/comments
https://api.github.com/repos/ollama/ollama/issues/1703/events
https://github.com/ollama/ollama/issues/1703
2,055,421,973
I_kwDOJ0Z1Ps56g0AV
1,703
Error: llama runner process has terminated. when running dolphin-mixtral
{ "login": "G-only1", "id": 96492140, "node_id": "U_kgDOBcBabA", "avatar_url": "https://avatars.githubusercontent.com/u/96492140?v=4", "gravatar_id": "", "url": "https://api.github.com/users/G-only1", "html_url": "https://github.com/G-only1", "followers_url": "https://api.github.com/users/G-only1/followers", "following_url": "https://api.github.com/users/G-only1/following{/other_user}", "gists_url": "https://api.github.com/users/G-only1/gists{/gist_id}", "starred_url": "https://api.github.com/users/G-only1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/G-only1/subscriptions", "organizations_url": "https://api.github.com/users/G-only1/orgs", "repos_url": "https://api.github.com/users/G-only1/repos", "events_url": "https://api.github.com/users/G-only1/events{/privacy}", "received_events_url": "https://api.github.com/users/G-only1/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
9
2023-12-25T06:21:44
2024-01-08T21:42:05
2024-01-08T21:42:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When I run `ollama run dolphin-mixtral` it gives the error: Error: llama runner process has terminated.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1703/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1703/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1285
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1285/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1285/comments
https://api.github.com/repos/ollama/ollama/issues/1285/events
https://github.com/ollama/ollama/issues/1285
2,011,848,986
I_kwDOJ0Z1Ps536mEa
1,285
Support `GPT2LMHeadModel` architecture
{ "login": "jhagelback", "id": 3829669, "node_id": "MDQ6VXNlcjM4Mjk2Njk=", "avatar_url": "https://avatars.githubusercontent.com/u/3829669?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jhagelback", "html_url": "https://github.com/jhagelback", "followers_url": "https://api.github.com/users/jhagelback/followers", "following_url": "https://api.github.com/users/jhagelback/following{/other_user}", "gists_url": "https://api.github.com/users/jhagelback/gists{/gist_id}", "starred_url": "https://api.github.com/users/jhagelback/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jhagelback/subscriptions", "organizations_url": "https://api.github.com/users/jhagelback/orgs", "repos_url": "https://api.github.com/users/jhagelback/repos", "events_url": "https://api.github.com/users/jhagelback/events{/privacy}", "received_events_url": "https://api.github.com/users/jhagelback/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
1
2023-11-27T09:23:49
2024-12-23T03:19:27
2024-12-23T03:19:27
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Any plans on supporting the _GPT2LMHeadModel_ architecture?
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1285/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7843
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7843/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7843/comments
https://api.github.com/repos/ollama/ollama/issues/7843/events
https://github.com/ollama/ollama/issues/7843
2,695,221,887
I_kwDOJ0Z1Ps6gpdJ_
7,843
New Tool Calling issues.
{ "login": "AssassinUKG", "id": 5285547, "node_id": "MDQ6VXNlcjUyODU1NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/5285547?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AssassinUKG", "html_url": "https://github.com/AssassinUKG", "followers_url": "https://api.github.com/users/AssassinUKG/followers", "following_url": "https://api.github.com/users/AssassinUKG/following{/other_user}", "gists_url": "https://api.github.com/users/AssassinUKG/gists{/gist_id}", "starred_url": "https://api.github.com/users/AssassinUKG/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AssassinUKG/subscriptions", "organizations_url": "https://api.github.com/users/AssassinUKG/orgs", "repos_url": "https://api.github.com/users/AssassinUKG/repos", "events_url": "https://api.github.com/users/AssassinUKG/events{/privacy}", "received_events_url": "https://api.github.com/users/AssassinUKG/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2024-11-26T16:02:31
2024-11-26T16:08:20
2024-11-26T16:08:20
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? In the image below you can see me using the code functions provided by the ollama-python examples on GitHub, specifically the chat.py example (with a client added for local Ollama). Sometimes the argument values are treated as strings and other times as integers, although the function definition uses an int type. ```python def add_two_numbers(a: int, b: int) -> int: """ Add two numbers Args: a (int): The first number b (int): The second number Returns: int: The sum of the two numbers """ return a + b ``` ![image](https://github.com/user-attachments/assets/2e8e409f-76ec-4a91-a679-8d9b4bb70410) How can this be resolved, as other tools will likely suffer from the same issue? ### OS Windows ### GPU Nvidia ### CPU AMD ### Ollama version ollama version is 0.4.5
{ "login": "AssassinUKG", "id": 5285547, "node_id": "MDQ6VXNlcjUyODU1NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/5285547?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AssassinUKG", "html_url": "https://github.com/AssassinUKG", "followers_url": "https://api.github.com/users/AssassinUKG/followers", "following_url": "https://api.github.com/users/AssassinUKG/following{/other_user}", "gists_url": "https://api.github.com/users/AssassinUKG/gists{/gist_id}", "starred_url": "https://api.github.com/users/AssassinUKG/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AssassinUKG/subscriptions", "organizations_url": "https://api.github.com/users/AssassinUKG/orgs", "repos_url": "https://api.github.com/users/AssassinUKG/repos", "events_url": "https://api.github.com/users/AssassinUKG/events{/privacy}", "received_events_url": "https://api.github.com/users/AssassinUKG/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7843/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6863
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6863/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6863/comments
https://api.github.com/repos/ollama/ollama/issues/6863/events
https://github.com/ollama/ollama/issues/6863
2,534,994,308
I_kwDOJ0Z1Ps6XGPGE
6,863
Qwen/Qwen2.5-Coder-7B
{ "login": "wuweinero", "id": 32291523, "node_id": "MDQ6VXNlcjMyMjkxNTIz", "avatar_url": "https://avatars.githubusercontent.com/u/32291523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wuweinero", "html_url": "https://github.com/wuweinero", "followers_url": "https://api.github.com/users/wuweinero/followers", "following_url": "https://api.github.com/users/wuweinero/following{/other_user}", "gists_url": "https://api.github.com/users/wuweinero/gists{/gist_id}", "starred_url": "https://api.github.com/users/wuweinero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wuweinero/subscriptions", "organizations_url": "https://api.github.com/users/wuweinero/orgs", "repos_url": "https://api.github.com/users/wuweinero/repos", "events_url": "https://api.github.com/users/wuweinero/events{/privacy}", "received_events_url": "https://api.github.com/users/wuweinero/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
4
2024-09-19T00:24:37
2024-09-21T02:36:24
2024-09-20T17:36:15
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/Qwen/Qwen2.5-Coder-7B
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6863/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6863/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2938
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2938/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2938/comments
https://api.github.com/repos/ollama/ollama/issues/2938/events
https://github.com/ollama/ollama/issues/2938
2,169,706,145
I_kwDOJ0Z1Ps6BUxah
2,938
Windows install path
{ "login": "pozzo-balbi", "id": 3755138, "node_id": "MDQ6VXNlcjM3NTUxMzg=", "avatar_url": "https://avatars.githubusercontent.com/u/3755138?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pozzo-balbi", "html_url": "https://github.com/pozzo-balbi", "followers_url": "https://api.github.com/users/pozzo-balbi/followers", "following_url": "https://api.github.com/users/pozzo-balbi/following{/other_user}", "gists_url": "https://api.github.com/users/pozzo-balbi/gists{/gist_id}", "starred_url": "https://api.github.com/users/pozzo-balbi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pozzo-balbi/subscriptions", "organizations_url": "https://api.github.com/users/pozzo-balbi/orgs", "repos_url": "https://api.github.com/users/pozzo-balbi/repos", "events_url": "https://api.github.com/users/pozzo-balbi/events{/privacy}", "received_events_url": "https://api.github.com/users/pozzo-balbi/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" } ]
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
13
2024-03-05T16:51:30
2024-11-23T19:07:45
2024-03-21T13:20:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, please add an option to choose an installation path during install, e.g. c:\program files\ollama. Installing under the user's home directory is a bad idea security-wise. Thanks
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2938/reactions", "total_count": 25, "+1": 25, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2938/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2953
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2953/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2953/comments
https://api.github.com/repos/ollama/ollama/issues/2953/events
https://github.com/ollama/ollama/issues/2953
2,171,510,787
I_kwDOJ0Z1Ps6BbqAD
2,953
EOF of starcoder2:15b on Ollama 0.1.28
{ "login": "owenzhao", "id": 2182896, "node_id": "MDQ6VXNlcjIxODI4OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/2182896?v=4", "gravatar_id": "", "url": "https://api.github.com/users/owenzhao", "html_url": "https://github.com/owenzhao", "followers_url": "https://api.github.com/users/owenzhao/followers", "following_url": "https://api.github.com/users/owenzhao/following{/other_user}", "gists_url": "https://api.github.com/users/owenzhao/gists{/gist_id}", "starred_url": "https://api.github.com/users/owenzhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/owenzhao/subscriptions", "organizations_url": "https://api.github.com/users/owenzhao/orgs", "repos_url": "https://api.github.com/users/owenzhao/repos", "events_url": "https://api.github.com/users/owenzhao/events{/privacy}", "received_events_url": "https://api.github.com/users/owenzhao/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
38
2024-03-06T13:26:53
2024-03-21T20:06:50
2024-03-12T00:11:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Mac mini M1 16GB 512GB macOS Sonoma 14.4 (23E214) ```bash ollama run starcoder2:15b pulling manifest pulling dc5deb763c38... 100% ▕████████████████████████████████████████████████▏ 9.1 GB pulling 4ec42cd966c9... 100% ▕████████████████████████████████████████████████▏ 12 KB pulling 5671842f8d52... 100% ▕████████████████████████████████████████████████▏ 346 B verifying sha256 digest writing manifest removing any unused layers success Error: Post "http://127.0.0.1:11434/api/chat": EOF ``` ollama serve output ```bash created by net/http.(*Transport).dialConn in goroutine 356 net/http/transport.go:1800 +0x1060 r0 0x0 r1 0x0 r2 0x5 r3 0x1934b8848 r4 0x73 r5 0x6e r6 0x32 r7 0x0 r8 0x600003532460 r9 0x0 r10 0x30 r11 0xc0452f67cb792c67 r12 0xc949d7c7509e6557 r13 0x386a188e1da18799 r14 0x2de4b19f0114bd9f r15 0x4c r16 0x1934b8900 r17 0xda000 r18 0x0 r19 0x1759c63b0 r20 0x65646f6372617473 r21 0xa r22 0x0 r23 0x12e4e8878 r24 0x600003532460 r25 0x175003272 r26 0x1759c6428 r27 0x1759c6420 r28 0x1759c63c0 r29 0x1759c62f0 lr 0x12e3aa824 sp 0x1759c6100 pc 0x1934b8904 fault 0x0 ``` Other models with similar size worked fine. For example, "qwen:14b" worked. ```bash qwen:14b 80362ced6553 8.2 GB 3 days ago ```
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2953/reactions", "total_count": 17, "+1": 17, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2953/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/22
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/22/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/22/comments
https://api.github.com/repos/ollama/ollama/issues/22/events
https://github.com/ollama/ollama/issues/22
1,781,579,627
I_kwDOJ0Z1Ps5qML9r
22
add a flag to override template prompts
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2023-06-29T22:11:13
2023-07-24T20:50:17
2023-07-24T20:50:17
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
null
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/22/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/22/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3531
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3531/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3531/comments
https://api.github.com/repos/ollama/ollama/issues/3531/events
https://github.com/ollama/ollama/issues/3531
2,230,171,167
I_kwDOJ0Z1Ps6E7bYf
3,531
Installation failure on Linux because directory `/usr/share/ollama` does not exist
{ "login": "hualet", "id": 2023967, "node_id": "MDQ6VXNlcjIwMjM5Njc=", "avatar_url": "https://avatars.githubusercontent.com/u/2023967?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hualet", "html_url": "https://github.com/hualet", "followers_url": "https://api.github.com/users/hualet/followers", "following_url": "https://api.github.com/users/hualet/following{/other_user}", "gists_url": "https://api.github.com/users/hualet/gists{/gist_id}", "starred_url": "https://api.github.com/users/hualet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hualet/subscriptions", "organizations_url": "https://api.github.com/users/hualet/orgs", "repos_url": "https://api.github.com/users/hualet/repos", "events_url": "https://api.github.com/users/hualet/events{/privacy}", "received_events_url": "https://api.github.com/users/hualet/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-04-08T03:20:25
2024-05-05T00:35:28
2024-05-05T00:34:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Installation failure on Linux because the directory `/usr/share/ollama` does not exist ➜ ~ curl -fsSL https://ollama.com/install.sh | sh >>> Downloading ollama... ######################################################################## 100.0%##O#- # ######################################################################## 100.0% >>> Installing ollama to /usr/local/bin... Please enter password Authentication successful >>> Adding ollama user to render group... >>> Adding ollama user to video group... >>> Adding current user to ollama group... >>> Creating ollama systemd service... >>> Enabling and starting ollama service... >>> Downloading AMD GPU dependencies... chmod: cannot access '/usr/share/ollama': No such file or directory ### What did you expect to see? Install success. ### Steps to reproduce Just run the command `curl -fsSL https://ollama.com/install.sh | sh ` on a Linux machine. ### Are there any recent changes that introduced the issue? _No response_ ### OS Linux ### Architecture amd64 ### Platform _No response_ ### Ollama version _No response_ ### GPU AMD ### GPU info _No response_ ### CPU AMD ### Other software _No response_
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3531/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1396
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1396/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1396/comments
https://api.github.com/repos/ollama/ollama/issues/1396/events
https://github.com/ollama/ollama/issues/1396
2,027,758,522
I_kwDOJ0Z1Ps543SO6
1,396
Continuous batching support
{ "login": "Huvinesh-Rajendran-12", "id": 81321926, "node_id": "MDQ6VXNlcjgxMzIxOTI2", "avatar_url": "https://avatars.githubusercontent.com/u/81321926?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Huvinesh-Rajendran-12", "html_url": "https://github.com/Huvinesh-Rajendran-12", "followers_url": "https://api.github.com/users/Huvinesh-Rajendran-12/followers", "following_url": "https://api.github.com/users/Huvinesh-Rajendran-12/following{/other_user}", "gists_url": "https://api.github.com/users/Huvinesh-Rajendran-12/gists{/gist_id}", "starred_url": "https://api.github.com/users/Huvinesh-Rajendran-12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Huvinesh-Rajendran-12/subscriptions", "organizations_url": "https://api.github.com/users/Huvinesh-Rajendran-12/orgs", "repos_url": "https://api.github.com/users/Huvinesh-Rajendran-12/repos", "events_url": "https://api.github.com/users/Huvinesh-Rajendran-12/events{/privacy}", "received_events_url": "https://api.github.com/users/Huvinesh-Rajendran-12/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
14
2023-12-06T06:28:28
2024-09-04T03:35:49
2024-09-04T03:35:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Does Ollama support continuous batching for concurrent requests? I couldn't find anything in the documentation.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1396/reactions", "total_count": 16, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 5 }
https://api.github.com/repos/ollama/ollama/issues/1396/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/605
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/605/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/605/comments
https://api.github.com/repos/ollama/ollama/issues/605/events
https://github.com/ollama/ollama/pull/605
1,913,902,538
PR_kwDOJ0Z1Ps5bQJq9
605
do not unload nouveau driver
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-09-26T16:37:51
2023-09-26T16:53:06
2023-09-26T16:53:05
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/605", "html_url": "https://github.com/ollama/ollama/pull/605", "diff_url": "https://github.com/ollama/ollama/pull/605.diff", "patch_url": "https://github.com/ollama/ollama/pull/605.patch", "merged_at": "2023-09-26T16:53:05" }
Unloading this driver on a desktop kills the display, which is not optimal. Instead, inform the user that they need to reboot.
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/605/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4578
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4578/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4578/comments
https://api.github.com/repos/ollama/ollama/issues/4578/events
https://github.com/ollama/ollama/pull/4578
2,311,037,663
PR_kwDOJ0Z1Ps5wOyDI
4,578
add phi 3 medium & moondream 2 in readme
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-05-22T16:53:22
2024-05-22T16:53:46
2024-05-22T16:53:45
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4578", "html_url": "https://github.com/ollama/ollama/pull/4578", "diff_url": "https://github.com/ollama/ollama/pull/4578.diff", "patch_url": "https://github.com/ollama/ollama/pull/4578.patch", "merged_at": "2024-05-22T16:53:45" }
null
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4578/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4578/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1301
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1301/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1301/comments
https://api.github.com/repos/ollama/ollama/issues/1301/events
https://github.com/ollama/ollama/pull/1301
2,014,300,064
PR_kwDOJ0Z1Ps5gi_q2
1,301
Correct MacOS Host Port in FAQ
{ "login": "ToasterUwU", "id": 43654377, "node_id": "MDQ6VXNlcjQzNjU0Mzc3", "avatar_url": "https://avatars.githubusercontent.com/u/43654377?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ToasterUwU", "html_url": "https://github.com/ToasterUwU", "followers_url": "https://api.github.com/users/ToasterUwU/followers", "following_url": "https://api.github.com/users/ToasterUwU/following{/other_user}", "gists_url": "https://api.github.com/users/ToasterUwU/gists{/gist_id}", "starred_url": "https://api.github.com/users/ToasterUwU/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ToasterUwU/subscriptions", "organizations_url": "https://api.github.com/users/ToasterUwU/orgs", "repos_url": "https://api.github.com/users/ToasterUwU/repos", "events_url": "https://api.github.com/users/ToasterUwU/events{/privacy}", "received_events_url": "https://api.github.com/users/ToasterUwU/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-11-28T12:11:17
2023-11-29T18:26:58
2023-11-29T16:44:04
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1301", "html_url": "https://github.com/ollama/ollama/pull/1301", "diff_url": "https://github.com/ollama/ollama/pull/1301.diff", "patch_url": "https://github.com/ollama/ollama/pull/1301.patch", "merged_at": "2023-11-29T16:44:04" }
For some reason, the port for macOS in this how-to was different from the one mentioned before and the one used afterwards in the Linux example. Skimming over this and copy-pasting it as a Mac user would result in the ollama program running on a different port, making it unreachable unless the port is changed in the settings of everything that uses Ollama. If this was meant to show that you can use other ports, and wasn't just a typo, this would be a very bad way to do it. A single digit being different is not something a normal person catches and then understands as "You can do this as well".
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1301/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/746
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/746/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/746/comments
https://api.github.com/repos/ollama/ollama/issues/746/events
https://github.com/ollama/ollama/issues/746
1,934,101,601
I_kwDOJ0Z1Ps5zSAxh
746
Support multi-modal models
{ "login": "arian81", "id": 35879206, "node_id": "MDQ6VXNlcjM1ODc5MjA2", "avatar_url": "https://avatars.githubusercontent.com/u/35879206?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arian81", "html_url": "https://github.com/arian81", "followers_url": "https://api.github.com/users/arian81/followers", "following_url": "https://api.github.com/users/arian81/following{/other_user}", "gists_url": "https://api.github.com/users/arian81/gists{/gist_id}", "starred_url": "https://api.github.com/users/arian81/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arian81/subscriptions", "organizations_url": "https://api.github.com/users/arian81/orgs", "repos_url": "https://api.github.com/users/arian81/repos", "events_url": "https://api.github.com/users/arian81/events{/privacy}", "received_events_url": "https://api.github.com/users/arian81/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
21
2023-10-10T01:14:13
2024-03-06T13:12:22
2023-12-16T09:00:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
This is currently one of the best open source multi-modal models based on Llama 7B. It would be nice to be able to host it in Ollama. https://llava-vl.github.io/
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/746/reactions", "total_count": 23, "+1": 23, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/746/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8494
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8494/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8494/comments
https://api.github.com/repos/ollama/ollama/issues/8494/events
https://github.com/ollama/ollama/issues/8494
2,798,081,134
I_kwDOJ0Z1Ps6mx1Ru
8,494
Can the /api/chat interface support session-related parameters?
{ "login": "lx687", "id": 192780267, "node_id": "U_kgDOC32X6w", "avatar_url": "https://avatars.githubusercontent.com/u/192780267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lx687", "html_url": "https://github.com/lx687", "followers_url": "https://api.github.com/users/lx687/followers", "following_url": "https://api.github.com/users/lx687/following{/other_user}", "gists_url": "https://api.github.com/users/lx687/gists{/gist_id}", "starred_url": "https://api.github.com/users/lx687/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lx687/subscriptions", "organizations_url": "https://api.github.com/users/lx687/orgs", "repos_url": "https://api.github.com/users/lx687/repos", "events_url": "https://api.github.com/users/lx687/events{/privacy}", "received_events_url": "https://api.github.com/users/lx687/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
3
2025-01-20T03:21:54
2025-01-24T09:30:06
2025-01-24T09:30:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I want to control the Q&A behavior of the same user through session ID data. Does the current interface support it? ![Image](https://github.com/user-attachments/assets/407a0515-8118-4612-958f-2ca6f6f74fbc)
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8494/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8494/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/479
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/479/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/479/comments
https://api.github.com/repos/ollama/ollama/issues/479/events
https://github.com/ollama/ollama/pull/479
1,884,836,473
PR_kwDOJ0Z1Ps5ZuhiR
479
update dockerfile
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-09-06T22:25:53
2023-09-06T22:44:25
2023-09-06T22:44:24
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/479", "html_url": "https://github.com/ollama/ollama/pull/479", "diff_url": "https://github.com/ollama/ollama/pull/479.diff", "patch_url": "https://github.com/ollama/ollama/pull/479.patch", "merged_at": "2023-09-06T22:44:24" }
``` docker build -t ollama . docker run -d -p 11434:11434 -v $HOME/.ollama:/home/ollama/.ollama ollama ``` This container image does not build with GPU support. That'll come later, after #454
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/479/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/479/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2725
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2725/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2725/comments
https://api.github.com/repos/ollama/ollama/issues/2725/events
https://github.com/ollama/ollama/issues/2725
2,152,229,961
I_kwDOJ0Z1Ps6ASGxJ
2,725
Ping api endpoint for more efficient network scanning
{ "login": "danemadsen", "id": 11537699, "node_id": "MDQ6VXNlcjExNTM3Njk5", "avatar_url": "https://avatars.githubusercontent.com/u/11537699?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danemadsen", "html_url": "https://github.com/danemadsen", "followers_url": "https://api.github.com/users/danemadsen/followers", "following_url": "https://api.github.com/users/danemadsen/following{/other_user}", "gists_url": "https://api.github.com/users/danemadsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/danemadsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danemadsen/subscriptions", "organizations_url": "https://api.github.com/users/danemadsen/orgs", "repos_url": "https://api.github.com/users/danemadsen/repos", "events_url": "https://api.github.com/users/danemadsen/events{/privacy}", "received_events_url": "https://api.github.com/users/danemadsen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-02-24T09:27:14
2024-03-01T02:13:17
2024-03-01T02:13:16
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Currently I'm using the /api/tags endpoint for automated scanning of the network to find Ollama. This is working fine, but it may be better to have a dedicated ping endpoint for this kind of operation.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2725/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3588
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3588/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3588/comments
https://api.github.com/repos/ollama/ollama/issues/3588/events
https://github.com/ollama/ollama/pull/3588
2,237,005,372
PR_kwDOJ0Z1Ps5sUmHe
3,588
remove header while getting model list
{ "login": "deepakdeore2004", "id": 313430, "node_id": "MDQ6VXNlcjMxMzQzMA==", "avatar_url": "https://avatars.githubusercontent.com/u/313430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/deepakdeore2004", "html_url": "https://github.com/deepakdeore2004", "followers_url": "https://api.github.com/users/deepakdeore2004/followers", "following_url": "https://api.github.com/users/deepakdeore2004/following{/other_user}", "gists_url": "https://api.github.com/users/deepakdeore2004/gists{/gist_id}", "starred_url": "https://api.github.com/users/deepakdeore2004/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/deepakdeore2004/subscriptions", "organizations_url": "https://api.github.com/users/deepakdeore2004/orgs", "repos_url": "https://api.github.com/users/deepakdeore2004/repos", "events_url": "https://api.github.com/users/deepakdeore2004/events{/privacy}", "received_events_url": "https://api.github.com/users/deepakdeore2004/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-04-11T06:31:14
2024-06-10T03:27:11
2024-06-10T01:57:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3588", "html_url": "https://github.com/ollama/ollama/pull/3588", "diff_url": "https://github.com/ollama/ollama/pull/3588.diff", "patch_url": "https://github.com/ollama/ollama/pull/3588.patch", "merged_at": null }
null
{ "login": "deepakdeore2004", "id": 313430, "node_id": "MDQ6VXNlcjMxMzQzMA==", "avatar_url": "https://avatars.githubusercontent.com/u/313430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/deepakdeore2004", "html_url": "https://github.com/deepakdeore2004", "followers_url": "https://api.github.com/users/deepakdeore2004/followers", "following_url": "https://api.github.com/users/deepakdeore2004/following{/other_user}", "gists_url": "https://api.github.com/users/deepakdeore2004/gists{/gist_id}", "starred_url": "https://api.github.com/users/deepakdeore2004/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/deepakdeore2004/subscriptions", "organizations_url": "https://api.github.com/users/deepakdeore2004/orgs", "repos_url": "https://api.github.com/users/deepakdeore2004/repos", "events_url": "https://api.github.com/users/deepakdeore2004/events{/privacy}", "received_events_url": "https://api.github.com/users/deepakdeore2004/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3588/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1812
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1812/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1812/comments
https://api.github.com/repos/ollama/ollama/issues/1812/events
https://github.com/ollama/ollama/issues/1812
2,067,828,105
I_kwDOJ0Z1Ps57QI2J
1,812
IMPROVEMENT: Proper calculation of the KV cache size inside gpu::NumGPU() instead of the 3/4 magic number...
{ "login": "jukofyork", "id": 69222624, "node_id": "MDQ6VXNlcjY5MjIyNjI0", "avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jukofyork", "html_url": "https://github.com/jukofyork", "followers_url": "https://api.github.com/users/jukofyork/followers", "following_url": "https://api.github.com/users/jukofyork/following{/other_user}", "gists_url": "https://api.github.com/users/jukofyork/gists{/gist_id}", "starred_url": "https://api.github.com/users/jukofyork/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jukofyork/subscriptions", "organizations_url": "https://api.github.com/users/jukofyork/orgs", "repos_url": "https://api.github.com/users/jukofyork/repos", "events_url": "https://api.github.com/users/jukofyork/events{/privacy}", "received_events_url": "https://api.github.com/users/jukofyork/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
3
2024-01-05T18:13:55
2024-01-08T21:42:01
2024-01-08T21:42:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
See: https://github.com/jmorganca/ollama/issues/1800#issuecomment-1878955910 Feel free to pull out the stuff from that thread - it's only in there as I did quite a lot of research on this to try to figure out the OOM errors.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1812/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1812/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8275
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8275/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8275/comments
https://api.github.com/repos/ollama/ollama/issues/8275/events
https://github.com/ollama/ollama/issues/8275
2,764,530,224
I_kwDOJ0Z1Ps6kx2Iw
8,275
Magnet download
{ "login": "Zig-VS-TypeScript-VS", "id": 192610801, "node_id": "U_kgDOC3sB8Q", "avatar_url": "https://avatars.githubusercontent.com/u/192610801?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Zig-VS-TypeScript-VS", "html_url": "https://github.com/Zig-VS-TypeScript-VS", "followers_url": "https://api.github.com/users/Zig-VS-TypeScript-VS/followers", "following_url": "https://api.github.com/users/Zig-VS-TypeScript-VS/following{/other_user}", "gists_url": "https://api.github.com/users/Zig-VS-TypeScript-VS/gists{/gist_id}", "starred_url": "https://api.github.com/users/Zig-VS-TypeScript-VS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zig-VS-TypeScript-VS/subscriptions", "organizations_url": "https://api.github.com/users/Zig-VS-TypeScript-VS/orgs", "repos_url": "https://api.github.com/users/Zig-VS-TypeScript-VS/repos", "events_url": "https://api.github.com/users/Zig-VS-TypeScript-VS/events{/privacy}", "received_events_url": "https://api.github.com/users/Zig-VS-TypeScript-VS/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-12-31T16:38:09
2025-01-08T17:42:39
2025-01-08T17:42:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Support magnet download of models. Magnet downloads would save ollama bandwidth, extend disk life, and give faster speeds. ### Need to do - The ollama site generates magnet links for models. - ollama seeds models via magnet.
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8275/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8275/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1727
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1727/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1727/comments
https://api.github.com/repos/ollama/ollama/issues/1727/events
https://github.com/ollama/ollama/issues/1727
2,057,088,840
I_kwDOJ0Z1Ps56nK9I
1,727
ollama doesn't use system RAM
{ "login": "DrGood01", "id": 130962326, "node_id": "U_kgDOB85Tlg", "avatar_url": "https://avatars.githubusercontent.com/u/130962326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DrGood01", "html_url": "https://github.com/DrGood01", "followers_url": "https://api.github.com/users/DrGood01/followers", "following_url": "https://api.github.com/users/DrGood01/following{/other_user}", "gists_url": "https://api.github.com/users/DrGood01/gists{/gist_id}", "starred_url": "https://api.github.com/users/DrGood01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DrGood01/subscriptions", "organizations_url": "https://api.github.com/users/DrGood01/orgs", "repos_url": "https://api.github.com/users/DrGood01/repos", "events_url": "https://api.github.com/users/DrGood01/events{/privacy}", "received_events_url": "https://api.github.com/users/DrGood01/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
29
2023-12-27T08:43:44
2025-01-15T16:12:00
2024-05-16T23:11:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm running Ollama on an Ubuntu 22 Linux laptop with 32 GB of RAM and an NVIDIA GTX 1650. Ollama loads the models exclusively into the graphics card's RAM and doesn't use any of the system RAM at all. Very frustrating, as it exits with "Error: llama runner exited, you may not have enough available memory to run this model" as soon as I try to chat...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1727/reactions", "total_count": 15, "+1": 15, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1727/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8186
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8186/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8186/comments
https://api.github.com/repos/ollama/ollama/issues/8186/events
https://github.com/ollama/ollama/issues/8186
2,753,145,864
I_kwDOJ0Z1Ps6kGawI
8,186
mllama doesn't support parallel requests yet - llama3.2-vision:11b for Standard_NC24ads_A100_v4
{ "login": "breddy-lgamerica", "id": 90788463, "node_id": "MDQ6VXNlcjkwNzg4NDYz", "avatar_url": "https://avatars.githubusercontent.com/u/90788463?v=4", "gravatar_id": "", "url": "https://api.github.com/users/breddy-lgamerica", "html_url": "https://github.com/breddy-lgamerica", "followers_url": "https://api.github.com/users/breddy-lgamerica/followers", "following_url": "https://api.github.com/users/breddy-lgamerica/following{/other_user}", "gists_url": "https://api.github.com/users/breddy-lgamerica/gists{/gist_id}", "starred_url": "https://api.github.com/users/breddy-lgamerica/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/breddy-lgamerica/subscriptions", "organizations_url": "https://api.github.com/users/breddy-lgamerica/orgs", "repos_url": "https://api.github.com/users/breddy-lgamerica/repos", "events_url": "https://api.github.com/users/breddy-lgamerica/events{/privacy}", "received_events_url": "https://api.github.com/users/breddy-lgamerica/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
6
2024-12-20T17:18:28
2025-01-13T01:43:48
2025-01-13T01:43:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? We are running an Ollama container in a Kubernetes cluster in Azure using Standard_NC24ads_A100_v4 and serving the mllama model llama3.2-vision:11b. We keep getting the error "mllama doesn't support parallel requests yet". How do we fix this? ### OS Linux, Docker ### GPU Nvidia ### CPU _No response_ ### Ollama version 0.5.4-0-g2ddc32d-dirty
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8186/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2678
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2678/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2678/comments
https://api.github.com/repos/ollama/ollama/issues/2678/events
https://github.com/ollama/ollama/issues/2678
2,149,095,107
I_kwDOJ0Z1Ps6AGJbD
2,678
Understanding Modelfile template with respect to conversational history
{ "login": "nikhil0360", "id": 43106856, "node_id": "MDQ6VXNlcjQzMTA2ODU2", "avatar_url": "https://avatars.githubusercontent.com/u/43106856?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nikhil0360", "html_url": "https://github.com/nikhil0360", "followers_url": "https://api.github.com/users/nikhil0360/followers", "following_url": "https://api.github.com/users/nikhil0360/following{/other_user}", "gists_url": "https://api.github.com/users/nikhil0360/gists{/gist_id}", "starred_url": "https://api.github.com/users/nikhil0360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nikhil0360/subscriptions", "organizations_url": "https://api.github.com/users/nikhil0360/orgs", "repos_url": "https://api.github.com/users/nikhil0360/repos", "events_url": "https://api.github.com/users/nikhil0360/events{/privacy}", "received_events_url": "https://api.github.com/users/nikhil0360/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
2
2024-02-22T13:36:39
2024-03-17T17:57:03
2024-03-17T17:57:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
[proposed Label] question Hello, I want to understand how the conversational history is fed back into the template from the Modelfile. For example, for llama2:chat ``` TEMPLATE """[INST] <<SYS>>{{ .System }}<</SYS>> {{ .Prompt }} [/INST] """ ``` I am able to do conversational question answering in the terminal, but I am not sure how the template takes care of the history. Conversely, how do I disable such behaviour? I want to run a fine-tuned model that answers the user based only on the current question; I don't want it to be influenced by the history/previous questions. I tried using `/set nohistory` but it doesn't seem to work (for llama2). Please help. I looked in the docs, online forums, and GitHub issues, but was unable to find any solutions.
{ "login": "nikhil0360", "id": 43106856, "node_id": "MDQ6VXNlcjQzMTA2ODU2", "avatar_url": "https://avatars.githubusercontent.com/u/43106856?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nikhil0360", "html_url": "https://github.com/nikhil0360", "followers_url": "https://api.github.com/users/nikhil0360/followers", "following_url": "https://api.github.com/users/nikhil0360/following{/other_user}", "gists_url": "https://api.github.com/users/nikhil0360/gists{/gist_id}", "starred_url": "https://api.github.com/users/nikhil0360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nikhil0360/subscriptions", "organizations_url": "https://api.github.com/users/nikhil0360/orgs", "repos_url": "https://api.github.com/users/nikhil0360/repos", "events_url": "https://api.github.com/users/nikhil0360/events{/privacy}", "received_events_url": "https://api.github.com/users/nikhil0360/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2678/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3790
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3790/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3790/comments
https://api.github.com/repos/ollama/ollama/issues/3790/events
https://github.com/ollama/ollama/pull/3790
2,254,874,782
PR_kwDOJ0Z1Ps5tQ7bN
3,790
use vanity imports
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-04-21T03:04:46
2024-04-21T03:30:17
2024-04-21T03:30:17
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3790", "html_url": "https://github.com/ollama/ollama/pull/3790", "diff_url": "https://github.com/ollama/ollama/pull/3790.diff", "patch_url": "https://github.com/ollama/ollama/pull/3790.patch", "merged_at": null }
null
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3790/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8664
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8664/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8664/comments
https://api.github.com/repos/ollama/ollama/issues/8664/events
https://github.com/ollama/ollama/issues/8664
2,818,425,629
I_kwDOJ0Z1Ps6n_cMd
8,664
Wrong GPU size calculation for the `command-r7b:7b` model
{ "login": "vvidovic", "id": 3177210, "node_id": "MDQ6VXNlcjMxNzcyMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/3177210?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vvidovic", "html_url": "https://github.com/vvidovic", "followers_url": "https://api.github.com/users/vvidovic/followers", "following_url": "https://api.github.com/users/vvidovic/following{/other_user}", "gists_url": "https://api.github.com/users/vvidovic/gists{/gist_id}", "starred_url": "https://api.github.com/users/vvidovic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vvidovic/subscriptions", "organizations_url": "https://api.github.com/users/vvidovic/orgs", "repos_url": "https://api.github.com/users/vvidovic/repos", "events_url": "https://api.github.com/users/vvidovic/events{/privacy}", "received_events_url": "https://api.github.com/users/vvidovic/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6849881759, "node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw", "url": "https://api.github.com/repos/ollama/ollama/labels/memory", "name": "memory", "color": "5017EA", "default": false, "description": "" } ]
open
false
null
[]
null
4
2025-01-29T14:44:48
2025-01-30T07:47:04
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I wasn't able to run `command-r7b:7b` model while all other larger models were running successfully. After some investigation and trial and error, I realized I could fix this issue by creating a new model that would offload fewer model layers to GPU. Initial state: ``` $ nvidia-smi Wed Jan 29 15:33:17 2025 +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 535.183.01 Driver Version: 535.183.01 CUDA Version: 12.2 | |-----------------------------------------+----------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+======================+======================| | 0 NVIDIA RTX A1000 Laptop GPU Off | 00000000:01:00.0 On | N/A | | N/A 56C P3 6W / 35W | 149MiB / 4096MiB | 16% Default | | | | N/A | +-----------------------------------------+----------------------+----------------------+ +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| | 0 N/A N/A 4937 G /usr/lib/xorg/Xorg 143MiB | +---------------------------------------------------------------------------------------+ ``` Running model, error produced: ``` $ ollama run command-r7b:7b Error: llama runner process has terminated: cudaMalloc failed: out of memory ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 1531936768 llama_new_context_with_model: failed to allocate compute buffers ``` A new model with fewer layers was created using the following modelfile: ``` # ollama create command-r7b-v:7b -f command-r7.modelfile FROM command-r7b:7b PARAMETER num_gpu 17 ``` Successfully running newly created model: ``` $ ollama run command-r7b-v:7b >>> /bye ``` Log information for error and success cases produced by `journal -S today` is attached. ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.5.7
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8664/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8664/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6199
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6199/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6199/comments
https://api.github.com/repos/ollama/ollama/issues/6199/events
https://github.com/ollama/ollama/issues/6199
2,450,769,631
I_kwDOJ0Z1Ps6SE8bf
6,199
Ollama crashes with Deepseek-Coder-V2-Lite-Instruct
{ "login": "shockme", "id": 470676, "node_id": "MDQ6VXNlcjQ3MDY3Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/470676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shockme", "html_url": "https://github.com/shockme", "followers_url": "https://api.github.com/users/shockme/followers", "following_url": "https://api.github.com/users/shockme/following{/other_user}", "gists_url": "https://api.github.com/users/shockme/gists{/gist_id}", "starred_url": "https://api.github.com/users/shockme/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shockme/subscriptions", "organizations_url": "https://api.github.com/users/shockme/orgs", "repos_url": "https://api.github.com/users/shockme/repos", "events_url": "https://api.github.com/users/shockme/events{/privacy}", "received_events_url": "https://api.github.com/users/shockme/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
9
2024-08-06T12:31:24
2024-10-31T18:19:26
2024-10-31T18:19:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The output is cut in the middle of generation. Here's the log: ``` Aug 06 15:10:46 user-desktop systemd[4465]: Started Ollama Service. Aug 06 15:10:46 user-desktop ollama[13639]: 2024/08/06 15:10:46 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11435 OL LAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:2 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/user/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:3 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]" Aug 06 15:10:46 user-desktop ollama[13639]: time=2024-08-06T15:10:46.362+03:00 level=INFO source=images.go:781 msg="total blobs: 92" Aug 06 15:10:46 user-desktop ollama[13639]: time=2024-08-06T15:10:46.365+03:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0" Aug 06 15:10:46 user-desktop ollama[13639]: time=2024-08-06T15:10:46.366+03:00 level=INFO source=routes.go:1155 msg="Listening on 127.0.0.1:11435 (version 0.3.3)" Aug 06 15:10:46 user-desktop ollama[13639]: time=2024-08-06T15:10:46.367+03:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama198573251/runners Aug 06 15:10:53 user-desktop ollama[13639]: time=2024-08-06T15:10:53.148+03:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60102]" Aug 06 15:10:53 user-desktop ollama[13639]: time=2024-08-06T15:10:53.148+03:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs" Aug 06 15:10:53 user-desktop ollama[13639]: time=2024-08-06T15:10:53.472+03:00 level=WARN source=amd_linux.go:59 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory" Aug 06 15:10:53 user-desktop ollama[13639]: time=2024-08-06T15:10:53.472+03:00 level=INFO source=amd_linux.go:360 msg="no compatible amdgpu devices detected" Aug 06 15:10:53 user-desktop ollama[13639]: time=2024-08-06T15:10:53.472+03:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-5968b8f6-eb32-e5b2-37a0-2a8637c2ae09 library=cuda compute=6.1 driver=12.2 name="Tesla P40" total="23.9 GiB" available="23.7 GiB" Aug 06 15:10:53 user-desktop ollama[13639]: time=2024-08-06T15:10:53.757+03:00 level=INFO source=sched.go:710 msg="new model will fit in available VRAM in single GPU, loading" model=/home/user/.ollama/models/blobs/sha256-373dcfc92e01372709b6164fc836f677a6280e25e9eac5c434c64223207bfc4f gpu=GPU-5968b8f6-eb32-e5b2-37a0-2a8637c2ae09 parallel=3 available=25430458368 required="17.7 GiB" Aug 06 15:10:53 user-desktop ollama[13639]: time=2024-08-06T15:10:53.759+03:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=28 layers.offload=28 layers.split="" memory.available="[23.7 GiB]" memory.required.full="17.7 GiB" memory.required.partial="17.7 GiB" memory.required.kv="1.6 GiB" memory.required.allocations="[17.7 GiB]" memory.weights.total="16.7 GiB" memory.weights.repeating="16.5 GiB" memory.weights.nonrepeating="212.5 MiB" 
memory.graph.full="228.0 MiB" memory.graph.partial="376.1 MiB" Aug 06 15:10:53 user-desktop ollama[13639]: time=2024-08-06T15:10:53.760+03:00 level=INFO source=server.go:384 msg="starting llama server" cmd="/tmp/ollama198573251/runners/cuda_v11/ollama_llama_server --model /home/user/.ollama/models/blobs/sha256-373dcfc92e01372709b6164fc836f677a6280e25e9eac5c434c64223207bfc4f --ctx-size 6144 --batch-size 512 --embedding --log-disable --n-gpu-layers 28 --parallel 3 --port 40025" Aug 06 15:10:53 user-desktop ollama[13639]: time=2024-08-06T15:10:53.760+03:00 level=INFO source=sched.go:445 msg="loaded runners" count=1 Aug 06 15:10:53 user-desktop ollama[13639]: time=2024-08-06T15:10:53.760+03:00 level=INFO source=server.go:584 msg="waiting for llama runner to start responding" Aug 06 15:10:53 user-desktop ollama[13639]: time=2024-08-06T15:10:53.761+03:00 level=INFO source=server.go:618 msg="waiting for server to become available" status="llm server error" Aug 06 15:10:53 user-desktop ollama[13757]: INFO [main] build info | build=1 commit="6eeaeba" tid="140158488453120" timestamp=1722946253 Aug 06 15:10:53 user-desktop ollama[13757]: INFO [main] system info | n_threads=16 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140158488453120" timestamp=1722946253 total_threads=32 Aug 06 15:10:53 user-desktop ollama[13757]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="40025" tid="140158488453120" timestamp=1722946253 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: loaded meta data with 38 key-value pairs and 377 tensors from /home/user/.ollama/models/blobs/sha256-373dcfc92e01372709b6164fc836f677a6280e25e9eac5c434c64223207bfc4f (version GGUF V3 (latest)) Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 0: general.architecture str = deepseek2 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 1: general.name str = DeepSeek-Coder-V2-Lite-Instruct Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 2: deepseek2.block_count u32 = 27 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 3: deepseek2.context_length u32 = 163840 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 4: deepseek2.embedding_length u32 = 2048 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 5: deepseek2.feed_forward_length u32 = 10944 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 6: deepseek2.attention.head_count u32 = 16 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 7: deepseek2.attention.head_count_kv u32 = 16 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 8: deepseek2.rope.freq_base f32 = 10000.000000 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 9: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 10: deepseek2.expert_used_count u32 = 6 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 11: general.file_type u32 = 7 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 12: deepseek2.leading_dense_block_count u32 = 1 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 13: deepseek2.vocab_size u32 = 102400 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 14: deepseek2.attention.kv_lora_rank u32 = 512 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 15: deepseek2.attention.key_length u32 = 192 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 16: deepseek2.attention.value_length u32 = 128 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 17: deepseek2.expert_feed_forward_length u32 = 1408 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 18: deepseek2.expert_count u32 = 64 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 19: deepseek2.expert_shared_count u32 = 2 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 20: deepseek2.expert_weights_scale f32 = 1.000000 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 21: deepseek2.rope.dimension_count u32 = 64 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 22: deepseek2.rope.scaling.type str = yarn Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 23: deepseek2.rope.scaling.factor f32 = 40.000000 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 24: deepseek2.rope.scaling.original_context_length u32 = 4096 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 25: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.070700 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 26: tokenizer.ggml.model str = gpt2 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 27: tokenizer.ggml.pre str = deepseek-llm Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 28: tokenizer.ggml.tokens arr[str,102400] = ["!", "\"", "#", "$", "%", "&", "'", ... Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 29: tokenizer.ggml.token_type arr[i32,102400] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... 
Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 30: tokenizer.ggml.merges arr[str,99757] = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e... Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 31: tokenizer.ggml.bos_token_id u32 = 100000 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 32: tokenizer.ggml.eos_token_id u32 = 100001 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 33: tokenizer.ggml.padding_token_id u32 = 100001 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 34: tokenizer.ggml.add_bos_token bool = true Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 35: tokenizer.ggml.add_eos_token bool = false Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 36: tokenizer.chat_template str = {% if not add_generation_prompt is de... Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - kv 37: general.quantization_version u32 = 2 Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - type f32: 108 tensors Aug 06 15:10:53 user-desktop ollama[13639]: llama_model_loader: - type q8_0: 269 tensors Aug 06 15:10:54 user-desktop ollama[13639]: time=2024-08-06T15:10:54.014+03:00 level=INFO source=server.go:618 msg="waiting for server to become available" status="llm server loading model" Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_vocab: special tokens cache size = 2400 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_vocab: token to piece cache size = 0.6661 MB Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: format = GGUF V3 (latest) Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: arch = deepseek2 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: vocab type = BPE Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_vocab = 102400 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_merges = 99757 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: vocab_only = 0 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_ctx_train = 163840 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_embd = 2048 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_layer = 27 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_head = 16 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_head_kv = 16 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_rot = 64 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_swa = 0 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_embd_head_k = 192 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_embd_head_v = 128 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_gqa = 1 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_embd_k_gqa = 3072 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_embd_v_gqa = 2048 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: f_norm_eps = 0.0e+00 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: f_clamp_kqv = 0.0e+00 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: f_logit_scale = 0.0e+00 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_ff = 10944 Aug 06 15:10:54 user-desktop 
ollama[13639]: llm_load_print_meta: n_expert = 64 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_expert_used = 6 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: causal attn = 1 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: pooling type = 0 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: rope type = 0 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: rope scaling = yarn Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: freq_base_train = 10000.0 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: freq_scale_train = 0.025 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_ctx_orig_yarn = 4096 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: rope_finetuned = unknown Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: ssm_d_conv = 0 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: ssm_d_inner = 0 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: ssm_d_state = 0 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: ssm_dt_rank = 0 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: model type = 16B Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: model ftype = Q8_0 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: model params = 15.71 B Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: model size = 15.55 GiB (8.51 BPW) Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: general.name = DeepSeek-Coder-V2-Lite-Instruct Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: BOS token = 100000 '<|begin▁of▁sentence|>' Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: EOS token = 100001 '<|end▁of▁sentence|>' Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: PAD token = 100001 '<|end▁of▁sentence|>' Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: LF token = 126 'Ä' Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: max token length = 256 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_layer_dense_lead = 1 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_lora_q = 0 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_lora_kv = 512 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_ff_exp = 1408 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: n_expert_shared = 2 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: expert_weights_scale = 1.0 Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_print_meta: rope_yarn_log_mul = 0.0707 Aug 06 15:10:54 user-desktop ollama[13639]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no Aug 06 15:10:54 user-desktop ollama[13639]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no Aug 06 15:10:54 user-desktop ollama[13639]: ggml_cuda_init: found 1 CUDA devices: Aug 06 15:10:54 user-desktop ollama[13639]: Device 0: Tesla P40, compute capability 6.1, VMM: yes Aug 06 15:10:54 user-desktop ollama[13639]: llm_load_tensors: ggml ctx size = 0.32 MiB Aug 06 15:12:43 user-desktop ollama[13639]: llm_load_tensors: offloading 27 repeating layers to GPU Aug 06 15:12:43 user-desktop ollama[13639]: llm_load_tensors: offloading non-repeating layers to GPU Aug 06 15:12:43 user-desktop ollama[13639]: llm_load_tensors: offloaded 28/28 layers to GPU Aug 06 15:12:43 user-desktop ollama[13639]: llm_load_tensors: CPU buffer size = 212.50 MiB Aug 06 15:12:43 user-desktop 
ollama[13639]: llm_load_tensors: CUDA0 buffer size = 15712.47 MiB Aug 06 15:12:52 user-desktop ollama[13639]: llama_new_context_with_model: n_ctx = 6144 Aug 06 15:12:52 user-desktop ollama[13639]: llama_new_context_with_model: n_batch = 512 Aug 06 15:12:52 user-desktop ollama[13639]: llama_new_context_with_model: n_ubatch = 512 Aug 06 15:12:52 user-desktop ollama[13639]: llama_new_context_with_model: flash_attn = 0 Aug 06 15:12:52 user-desktop ollama[13639]: llama_new_context_with_model: freq_base = 10000.0 Aug 06 15:12:52 user-desktop ollama[13639]: llama_new_context_with_model: freq_scale = 0.025 Aug 06 15:12:52 user-desktop ollama[13639]: llama_kv_cache_init: CUDA0 KV buffer size = 1620.00 MiB Aug 06 15:12:52 user-desktop ollama[13639]: llama_new_context_with_model: KV self size = 1620.00 MiB, K (f16): 972.00 MiB, V (f16): 648.00 MiB Aug 06 15:12:52 user-desktop ollama[13639]: llama_new_context_with_model: CUDA_Host output buffer size = 1.20 MiB Aug 06 15:12:52 user-desktop ollama[13639]: llama_new_context_with_model: CUDA0 compute buffer size = 228.00 MiB Aug 06 15:12:52 user-desktop ollama[13639]: llama_new_context_with_model: CUDA_Host compute buffer size = 16.01 MiB Aug 06 15:12:52 user-desktop ollama[13639]: llama_new_context_with_model: graph nodes = 1924 Aug 06 15:12:52 user-desktop ollama[13639]: llama_new_context_with_model: graph splits = 2 Aug 06 15:12:52 user-desktop ollama[13757]: INFO [main] model loaded | tid="140158488453120" timestamp=1722946372 Aug 06 15:12:52 user-desktop ollama[13639]: time=2024-08-06T15:12:52.887+03:00 level=INFO source=server.go:623 msg="llama runner started in 119.13 seconds" Aug 06 15:12:58 user-desktop ollama[13639]: /go/src/github.com/ollama/ollama/llm/llama.cpp/src/llama.cpp:15093: Deepseek2 does not support K-shift Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13758] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13759] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13760] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13761] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13762] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13763] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13764] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13765] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13766] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13767] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13768] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13769] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13770] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13771] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13772] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13773] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13774] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13775] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13776] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13777] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13778] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13779] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13780] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13781] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13782] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13783] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13784] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13785] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13786] Aug 06 15:12:58 
user-desktop ollama[16692]: [New LWP 13787] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13788] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13789] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13790] Aug 06 15:12:58 user-desktop ollama[16692]: [New LWP 13791] Aug 06 15:12:59 user-desktop ollama[16692]: [Thread debugging using libthread_db enabled] Aug 06 15:12:59 user-desktop ollama[16692]: Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Aug 06 15:12:59 user-desktop ollama[16692]: 0x00007f791682842f in __GI___wait4 (pid=16692, stat_loc=0x7ffeda1838e4, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30 Aug 06 15:12:59 user-desktop ollama[13639]: 30 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory. Aug 06 15:12:59 user-desktop ollama[16692]: #0 0x00007f791682842f in __GI___wait4 (pid=16692, stat_loc=0x7ffeda1838e4, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30 Aug 06 15:12:59 user-desktop ollama[16692]: 30 in ../sysdeps/unix/sysv/linux/wait4.c Aug 06 15:12:59 user-desktop ollama[16692]: #1 0x0000000000517028 in ggml_abort () Aug 06 15:12:59 user-desktop ollama[16692]: #2 0x0000000000779741 in llama_kv_cache_update () Aug 06 15:12:59 user-desktop ollama[16692]: #3 0x0000000000783804 in llama_decode () Aug 06 15:12:59 user-desktop ollama[16692]: #4 0x00000000004ba577 in llama_server_context::update_slots() () Aug 06 15:12:59 user-desktop ollama[16692]: #5 0x000000000048f437 in llama_server_queue::start_loop() () Aug 06 15:12:59 user-desktop ollama[16692]: #6 0x000000000043bd69 in main () Aug 06 15:12:59 user-desktop ollama[16692]: [Inferior 1 (process 13757) detached] Aug 06 15:12:59 user-desktop ollama[13639]: [GIN] 2024/08/06 - 15:12:59 | 200 | 2m6s | 127.0.0.1 | POST "/api/chat" Aug 06 15:18:04 user-desktop ollama[13639]: time=2024-08-06T15:18:04.684+03:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.067024501 model=/home/user/.ollama/models/blobs/sha256-373dcfc92e01372709b6164fc836f677a6280e25e9eac5c434c64223207bfc4f Aug 06 15:18:04 user-desktop ollama[13639]: time=2024-08-06T15:18:04.920+03:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.30378812 model=/home/user/.ollama/models/blobs/sha256-373dcfc92e01372709b6164fc836f677a6280e25e9eac5c434c64223207bfc4f Aug 06 15:18:05 user-desktop ollama[13639]: time=2024-08-06T15:18:05.173+03:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.556641741 model=/home/user/.ollama/models/blobs/sha256-373dcfc92e01372709b6164fc836f677a6280e25e9eac5c434c64223207bfc4f ``` This is also reproducible with 0.2.8. Let me know what other info you need. ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.3
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6199/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4839
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4839/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4839/comments
https://api.github.com/repos/ollama/ollama/issues/4839/events
https://github.com/ollama/ollama/issues/4839
2,336,357,770
I_kwDOJ0Z1Ps6LQf2K
4,839
/api/list shows Start of CE 'expires_at'
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
0
2024-06-05T16:31:56
2024-06-05T18:19:25
2024-06-05T18:19:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

Should not return the field

### OS

_No response_

### GPU

_No response_

### CPU

_No response_

### Ollama version

_No response_
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4839/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4994
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4994/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4994/comments
https://api.github.com/repos/ollama/ollama/issues/4994/events
https://github.com/ollama/ollama/issues/4994
2,347,973,132
I_kwDOJ0Z1Ps6L8zoM
4,994
support for recurrent gemma
{ "login": "olumolu", "id": 162728301, "node_id": "U_kgDOCbMJbQ", "avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/olumolu", "html_url": "https://github.com/olumolu", "followers_url": "https://api.github.com/users/olumolu/followers", "following_url": "https://api.github.com/users/olumolu/following{/other_user}", "gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}", "starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/olumolu/subscriptions", "organizations_url": "https://api.github.com/users/olumolu/orgs", "repos_url": "https://api.github.com/users/olumolu/repos", "events_url": "https://api.github.com/users/olumolu/events{/privacy}", "received_events_url": "https://api.github.com/users/olumolu/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
3
2024-06-12T06:58:20
2024-06-27T18:26:32
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/google/recurrentgemma-2b-it Support for recurrent gemma
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4994/reactions", "total_count": 4, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4994/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2404
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2404/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2404/comments
https://api.github.com/repos/ollama/ollama/issues/2404/events
https://github.com/ollama/ollama/pull/2404
2,124,244,788
PR_kwDOJ0Z1Ps5mU6oE
2,404
Add GBNF grammar support
{ "login": "jquesnelle", "id": 687076, "node_id": "MDQ6VXNlcjY4NzA3Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/687076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jquesnelle", "html_url": "https://github.com/jquesnelle", "followers_url": "https://api.github.com/users/jquesnelle/followers", "following_url": "https://api.github.com/users/jquesnelle/following{/other_user}", "gists_url": "https://api.github.com/users/jquesnelle/gists{/gist_id}", "starred_url": "https://api.github.com/users/jquesnelle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jquesnelle/subscriptions", "organizations_url": "https://api.github.com/users/jquesnelle/orgs", "repos_url": "https://api.github.com/users/jquesnelle/repos", "events_url": "https://api.github.com/users/jquesnelle/events{/privacy}", "received_events_url": "https://api.github.com/users/jquesnelle/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
10
2024-02-08T02:28:09
2024-12-05T00:42:25
2024-12-05T00:42:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2404", "html_url": "https://github.com/ollama/ollama/pull/2404", "diff_url": "https://github.com/ollama/ollama/pull/2404.diff", "patch_url": "https://github.com/ollama/ollama/pull/2404.patch", "merged_at": null }
This is an updated version of #1606 that accounts for changes to the code since it was originally submitted. Adds support for llama.cpp's GBNF grammars, which enable very specific steering of model outputs. This feature is already used on the backend when the `format` option is set to `json`, but this change allows any arbitrary grammar to be passed in. In the case where both `grammar` and `format` are specified, the `format` is preferred.
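To make the proposed option concrete, here is a minimal sketch of how a client might pass a GBNF grammar, assuming the PR exposes a top-level `grammar` field on `/api/generate` alongside the existing `format` option; the field name and exact behaviour are taken from the description above and are not part of the released API.

```python
import requests

# A tiny GBNF grammar that constrains the completion to exactly "yes" or "no".
GRAMMAR = r'''
root ::= "yes" | "no"
'''

# Assumption: the PR adds a top-level "grammar" option to /api/generate.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",
        "prompt": "Is the sky blue? Answer yes or no.",
        "grammar": GRAMMAR,  # hypothetical field proposed by this PR
        "stream": False,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```

Under that assumption, the grammar above would force the output to be `yes` or `no`, and setting `format: "json"` at the same time would take precedence over the grammar, per the description above.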
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/users/ParthSareen/followers", "following_url": "https://api.github.com/users/ParthSareen/following{/other_user}", "gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions", "organizations_url": "https://api.github.com/users/ParthSareen/orgs", "repos_url": "https://api.github.com/users/ParthSareen/repos", "events_url": "https://api.github.com/users/ParthSareen/events{/privacy}", "received_events_url": "https://api.github.com/users/ParthSareen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2404/reactions", "total_count": 37, "+1": 28, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 9, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2404/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8272
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8272/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8272/comments
https://api.github.com/repos/ollama/ollama/issues/8272/events
https://github.com/ollama/ollama/issues/8272
2,764,069,808
I_kwDOJ0Z1Ps6kwFuw
8,272
Ollama models give low inference speed with the Continue extension on VS Code Community Edition.
{ "login": "ENUMERA8OR", "id": 65213780, "node_id": "MDQ6VXNlcjY1MjEzNzgw", "avatar_url": "https://avatars.githubusercontent.com/u/65213780?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ENUMERA8OR", "html_url": "https://github.com/ENUMERA8OR", "followers_url": "https://api.github.com/users/ENUMERA8OR/followers", "following_url": "https://api.github.com/users/ENUMERA8OR/following{/other_user}", "gists_url": "https://api.github.com/users/ENUMERA8OR/gists{/gist_id}", "starred_url": "https://api.github.com/users/ENUMERA8OR/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ENUMERA8OR/subscriptions", "organizations_url": "https://api.github.com/users/ENUMERA8OR/orgs", "repos_url": "https://api.github.com/users/ENUMERA8OR/repos", "events_url": "https://api.github.com/users/ENUMERA8OR/events{/privacy}", "received_events_url": "https://api.github.com/users/ENUMERA8OR/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-12-31T07:45:17
2025-01-13T01:49:18
2025-01-13T01:49:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please tell me how to troubleshoot this issue. I want to improve the model inference speed in VS Code. Any suggestions would be helpful.
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8272/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6646
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6646/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6646/comments
https://api.github.com/repos/ollama/ollama/issues/6646/events
https://github.com/ollama/ollama/issues/6646
2,506,522,708
I_kwDOJ0Z1Ps6VZoBU
6,646
POST /v1/chat/completions returns 404 not 400 for model not found
{ "login": "codefromthecrypt", "id": 64215, "node_id": "MDQ6VXNlcjY0MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/64215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codefromthecrypt", "html_url": "https://github.com/codefromthecrypt", "followers_url": "https://api.github.com/users/codefromthecrypt/followers", "following_url": "https://api.github.com/users/codefromthecrypt/following{/other_user}", "gists_url": "https://api.github.com/users/codefromthecrypt/gists{/gist_id}", "starred_url": "https://api.github.com/users/codefromthecrypt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codefromthecrypt/subscriptions", "organizations_url": "https://api.github.com/users/codefromthecrypt/orgs", "repos_url": "https://api.github.com/users/codefromthecrypt/repos", "events_url": "https://api.github.com/users/codefromthecrypt/events{/privacy}", "received_events_url": "https://api.github.com/users/codefromthecrypt/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-09-05T00:22:03
2024-09-09T22:21:47
2024-09-09T22:21:46
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

POST /v1/chat/completions returns 404, not 400, for model not found. Semantically, the better code here is 400, as it is an invalid argument on a correct route. Using 404 messages on a route that exists is confusing and had me doubting whether the routes were mounted or not. This seems to be the same for other routes.

Basically, I suggest taking any code that looks like this:

```go
c.JSON(http.StatusNotFound, gin.H{"error": fmt.Sprintf("model '%s' not found", req.Model)})
```

and switching it to a bad request:

```go
c.JSON(http.StatusBadRequest, gin.H{"error": fmt.Sprintf("model '%s' not found", req.Model)})
```

That way, we know the parameter was wrong vs not being able to find the route. I can raise a PR if you like.

### OS

Linux

### GPU

Apple

### CPU

_No response_

### Ollama version

0.3.9
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6646/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1026
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1026/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1026/comments
https://api.github.com/repos/ollama/ollama/issues/1026/events
https://github.com/ollama/ollama/pull/1026
1,980,680,803
PR_kwDOJ0Z1Ps5exUCq
1,026
Update client.py
{ "login": "eltociear", "id": 22633385, "node_id": "MDQ6VXNlcjIyNjMzMzg1", "avatar_url": "https://avatars.githubusercontent.com/u/22633385?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eltociear", "html_url": "https://github.com/eltociear", "followers_url": "https://api.github.com/users/eltociear/followers", "following_url": "https://api.github.com/users/eltociear/following{/other_user}", "gists_url": "https://api.github.com/users/eltociear/gists{/gist_id}", "starred_url": "https://api.github.com/users/eltociear/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eltociear/subscriptions", "organizations_url": "https://api.github.com/users/eltociear/orgs", "repos_url": "https://api.github.com/users/eltociear/repos", "events_url": "https://api.github.com/users/eltociear/events{/privacy}", "received_events_url": "https://api.github.com/users/eltociear/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-11-07T06:59:18
2023-11-07T17:55:47
2023-11-07T17:55:47
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1026", "html_url": "https://github.com/ollama/ollama/pull/1026", "diff_url": "https://github.com/ollama/ollama/pull/1026.diff", "patch_url": "https://github.com/ollama/ollama/pull/1026.patch", "merged_at": "2023-11-07T17:55:47" }
recieve -> receive
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1026/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1026/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2204
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2204/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2204/comments
https://api.github.com/repos/ollama/ollama/issues/2204/events
https://github.com/ollama/ollama/issues/2204
2,101,882,474
I_kwDOJ0Z1Ps59SC5q
2,204
Questions about context size
{ "login": "swip3798", "id": 33018263, "node_id": "MDQ6VXNlcjMzMDE4MjYz", "avatar_url": "https://avatars.githubusercontent.com/u/33018263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/swip3798", "html_url": "https://github.com/swip3798", "followers_url": "https://api.github.com/users/swip3798/followers", "following_url": "https://api.github.com/users/swip3798/following{/other_user}", "gists_url": "https://api.github.com/users/swip3798/gists{/gist_id}", "starred_url": "https://api.github.com/users/swip3798/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/swip3798/subscriptions", "organizations_url": "https://api.github.com/users/swip3798/orgs", "repos_url": "https://api.github.com/users/swip3798/repos", "events_url": "https://api.github.com/users/swip3798/events{/privacy}", "received_events_url": "https://api.github.com/users/swip3798/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
9
2024-01-26T09:30:45
2024-12-16T06:37:58
2024-05-10T01:06:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Before I start, thank you for this amazing project! It's really great to run LLMs on my own hardware this easily. I am currently building a small story-writing application that uses ollama to provide a "cowriter" AI that writes along with the user, similar to how AIDungeon or NovelAI work. Since the stories have no size limit, they will eventually become larger than the context size of the model. This has led me to several questions about how exactly ollama handles cases where the prompt is larger than the context size of the chosen model. Will it get trimmed, and if so, how exactly? Is the template always kept in the context and only the prompt trimmed, or will the template be cut off too? Or do I understand this completely wrong? Additionally, the users of my app should be able to add a "long term memory", essentially just more text placed at the beginning of the prompt, so that the AI can have information about parts of the story that are already outside of the context size. That of course makes it necessary that this memory text is definitely kept in the model's context. Now, all of this would be fairly simple to implement myself if there were a tokenize/detokenize endpoint. I have seen the issues regarding that, so maybe this can also be achieved using the chat endpoint? But then again, what happens when the context size is exceeded? Sorry for all these questions at once; I would be really thankful if you could share some insights on how this works.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2204/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8594
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8594/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8594/comments
https://api.github.com/repos/ollama/ollama/issues/8594/events
https://github.com/ollama/ollama/issues/8594
2,811,633,348
I_kwDOJ0Z1Ps6nlh7E
8,594
Ollama stops accessing GPU and reverts to CPU after running for extended periods
{ "login": "loca5790", "id": 96643826, "node_id": "U_kgDOBcKq8g", "avatar_url": "https://avatars.githubusercontent.com/u/96643826?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loca5790", "html_url": "https://github.com/loca5790", "followers_url": "https://api.github.com/users/loca5790/followers", "following_url": "https://api.github.com/users/loca5790/following{/other_user}", "gists_url": "https://api.github.com/users/loca5790/gists{/gist_id}", "starred_url": "https://api.github.com/users/loca5790/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loca5790/subscriptions", "organizations_url": "https://api.github.com/users/loca5790/orgs", "repos_url": "https://api.github.com/users/loca5790/repos", "events_url": "https://api.github.com/users/loca5790/events{/privacy}", "received_events_url": "https://api.github.com/users/loca5790/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
6
2025-01-26T15:52:00
2025-01-27T16:01:22
2025-01-27T15:55:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

I have ollama set to be persistent in VRAM because of my Home Assistant usage. I moved to an RTX 3090, and after sometimes 12 hours and other times more than a day, Ollama will stop using the GPU and revert to CPU only. It then gets stuck spooling the CPU up for hours at a time without generating any response.

System is:
- Ryzen 5700G
- 64GB RAM
- RTX 3090

Ollama is running via Docker Compose:

```
services:
  ollama:
    volumes:
      - ollama:/root/.ollama
    container_name: ollama
    pull_policy: if_not_present
    tty: true
    restart: unless-stopped
    image: ollama/ollama:${OLLAMA_DOCKER_TAG-latest}
    ports:
      - ${OLLAMA_WEBAPI_PORT-11434}:11434
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: [gpu, compute, utility] #["gpu"]
              count: all
    environment:
      - OLLAMA_DEBUG=1
      - CUDA_VISIBLE_DEVICES=0 # Force use of the GPU

  open-webui:
    build:
      context: .
      args:
        OLLAMA_BASE_URL: '/ollama'
      dockerfile: Dockerfile
    image: ghcr.io/open-webui/open-webui:${WEBUI_DOCKER_TAG-main}
    container_name: open-webui
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - ${OPEN_WEBUI_PORT-3000}:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
      - 'WEBUI_SECRET_KEY='
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

volumes:
  ollama: {}
  open-webui: {}
```

I tried adding CUDA_VISIBLE_DEVICES to force use of the GPU. A restart of the container will bring it back up and load the model back onto the GPU. I have tried to stress test it by running multiple parallel conversation agents without the GPU ever being dropped. I can't find anything in the log, or in the debug output, indicating that the GPU becomes unavailable. The only thing in the log that tells me it has dropped the GPU is that on a request it will load the model and reference the CPU. It could be something on my end, but I've spent a few days on this without luck. I've had this happen on two machines now.

Second machine this happened on: the same Docker Compose setup running in a VM on Ubuntu Server, with an RTX 3060 running llava-phi3 as the model, not kept persistent, only loaded on request.

### OS

Docker, Linux

### GPU

Nvidia

### CPU

Intel, AMD

### Ollama version

0.5.4
{ "login": "loca5790", "id": 96643826, "node_id": "U_kgDOBcKq8g", "avatar_url": "https://avatars.githubusercontent.com/u/96643826?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loca5790", "html_url": "https://github.com/loca5790", "followers_url": "https://api.github.com/users/loca5790/followers", "following_url": "https://api.github.com/users/loca5790/following{/other_user}", "gists_url": "https://api.github.com/users/loca5790/gists{/gist_id}", "starred_url": "https://api.github.com/users/loca5790/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loca5790/subscriptions", "organizations_url": "https://api.github.com/users/loca5790/orgs", "repos_url": "https://api.github.com/users/loca5790/repos", "events_url": "https://api.github.com/users/loca5790/events{/privacy}", "received_events_url": "https://api.github.com/users/loca5790/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8594/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5281
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5281/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5281/comments
https://api.github.com/repos/ollama/ollama/issues/5281/events
https://github.com/ollama/ollama/issues/5281
2,373,571,955
I_kwDOJ0Z1Ps6NedVz
5,281
update /show to work like command line show
{ "login": "iplayfast", "id": 751306, "node_id": "MDQ6VXNlcjc1MTMwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iplayfast", "html_url": "https://github.com/iplayfast", "followers_url": "https://api.github.com/users/iplayfast/followers", "following_url": "https://api.github.com/users/iplayfast/following{/other_user}", "gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}", "starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions", "organizations_url": "https://api.github.com/users/iplayfast/orgs", "repos_url": "https://api.github.com/users/iplayfast/repos", "events_url": "https://api.github.com/users/iplayfast/events{/privacy}", "received_events_url": "https://api.github.com/users/iplayfast/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-06-25T20:07:58
2024-06-28T20:15:53
2024-06-28T20:15:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I really like the new `ollama show <model>` feature. When running ollama from the command line or via the URL, it would be nice to be able to get the same type of info without actually loading the model and requesting all the individual sections.

Currently:

```
>>> /show
Available Commands:
  /show info         Show details for this model
  /show license      Show model license
  /show modelfile    Show Modelfile for this model
  /show parameters   Show parameters for this model
  /show system       Show system message
  /show template     Show prompt template
```

I'm thinking something like

```
/show model <model>
```

for example would show the same as the command line:

```
ollama show llama3
  Model
    arch               llama
    parameters         8.0B
    quantization       Q4_0
    context length     8192
    embedding length   4096

  Parameters
    num_keep   24
    stop       "<|start_header_id|>"
    stop       "<|end_header_id|>"
    stop       "<|eot_id|>"

  License
    META LLAMA 3 COMMUNITY LICENSE AGREEMENT
    Meta Llama 3 Version Release Date: April 18, 2024
```

**OR** `/show model_json <model>`, which would show the same thing in JSON format.
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5281/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3577
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3577/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3577/comments
https://api.github.com/repos/ollama/ollama/issues/3577/events
https://github.com/ollama/ollama/issues/3577
2,235,580,063
I_kwDOJ0Z1Ps6FQD6f
3,577
error when running command r plus
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/taozhiyuai/followers", "following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}", "gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}", "starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions", "organizations_url": "https://api.github.com/users/taozhiyuai/orgs", "repos_url": "https://api.github.com/users/taozhiyuai/repos", "events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}", "received_events_url": "https://api.github.com/users/taozhiyuai/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-04-10T13:17:09
2024-04-12T19:14:03
2024-04-12T19:14:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

I imported the command r plus GGUF successfully, but got an error when running it.

<img width="837" alt="Screenshot 2024-04-10 21 14 08" src="https://github.com/ollama/ollama/assets/146583103/37fbd1d2-7b75-432b-be86-af2c31b414d7">

### What did you expect to see?

_No response_

### Steps to reproduce

_No response_

### Are there any recent changes that introduced the issue?

_No response_

### OS

macOS

### Architecture

_No response_

### Platform

_No response_

### Ollama version

ollama version is 0.1.31

### GPU

Apple

### GPU info

_No response_

### CPU

Apple

### Other software

_No response_
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3577/reactions", "total_count": 3, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/3577/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1577
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1577/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1577/comments
https://api.github.com/repos/ollama/ollama/issues/1577/events
https://github.com/ollama/ollama/issues/1577
2,046,062,096
I_kwDOJ0Z1Ps559G4Q
1,577
ValueError: Error raised by inference API HTTP code: 500, {"error":"failed to generate embedding"}
{ "login": "doanaktar", "id": 66390064, "node_id": "MDQ6VXNlcjY2MzkwMDY0", "avatar_url": "https://avatars.githubusercontent.com/u/66390064?v=4", "gravatar_id": "", "url": "https://api.github.com/users/doanaktar", "html_url": "https://github.com/doanaktar", "followers_url": "https://api.github.com/users/doanaktar/followers", "following_url": "https://api.github.com/users/doanaktar/following{/other_user}", "gists_url": "https://api.github.com/users/doanaktar/gists{/gist_id}", "starred_url": "https://api.github.com/users/doanaktar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/doanaktar/subscriptions", "organizations_url": "https://api.github.com/users/doanaktar/orgs", "repos_url": "https://api.github.com/users/doanaktar/repos", "events_url": "https://api.github.com/users/doanaktar/events{/privacy}", "received_events_url": "https://api.github.com/users/doanaktar/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" }, { "id": 5895046125, "node_id": "LA_kwDOJ0Z1Ps8AAAABX19D7Q", "url": "https://api.github.com/repos/ollama/ollama/labels/integration", "name": "integration", "color": "92E43A", "default": false, "description": "" } ]
closed
false
null
[]
null
11
2023-12-18T08:47:20
2024-06-30T18:05:59
2024-05-07T00:06:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When I'm trying ollama for document chat I get an API error when it tries to create the vectorstore.

```python
from langchain.llms import Ollama
from langchain.document_loaders import WebBaseLoader
from langchain.embeddings import OllamaEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.text_splitter import RecursiveCharacterTextSplitter

ollama = Ollama(base_url='http://localhost:11434', model="llama2")
print(ollama("why is the sky blue"))
loader = WebBaseLoader("https://www.gutenberg.org/files/1727/1727-h/1727-h.htm")
data = loader.load()
text_splitter=RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)
oembed = OllamaEmbeddings(base_url="http://localhost:11434", model="llama2")
print("embedding: ", oembed)
vectorstore = Chroma.from_documents(documents=all_splits, embedding=oembed)
question="Who is Neleus and who is in Neleus' family?"
docs = vectorstore.similarity_search(question)
len(docs)
qachain=RetrievalQA.from_chain_type(ollama, retriever=vectorstore.as_retriever())
qachain({"query": question})
```

In my terminal the prints look like this:

```terminal
The sky appears blue because of a phenomenon called Rayleigh scattering, which occurs when sunlight enters Earth's atmosphere. The sunlight encounters tiny molecules of gases such as nitrogen and oxygen in the air, which scatter the light in all directions. The shortest wavelengths of light, such as violet and blue, are scattered more than other colors, such as red and orange, because they have a shorter wave length. This means that these colors are dispersed throughout the atmosphere, giving the sky its blue appearance.

Other factors can also affect the color of the sky, such as the presence of dust, water vapor, and pollutants in the air. For example, during sunrise and sunset, the sky may take on hues of red, orange, and pink due to the scattering of light by atmospheric particles.

It's worth noting that the color of the sky can also vary depending on the observer's location and the time of day. For example, the sky may appear more blue or gray if you are near a body of water or in an area with a high level of air pollution.

In summary, the sky appears blue because of the way light interacts with the tiny molecules of gases in the air, known as Rayleigh scattering. This phenomenon scatters shorter wavelengths of light, such as blue and violet, more than longer wavelengths, giving the sky its characteristic blue color.
embedding: base_url='http://localhost:11434' model='llama2' embed_instruction='passage: ' query_instruction='query: ' mirostat=None mirostat_eta=None mirostat_tau=None num_ctx=None num_gpu=None num_thread=None repeat_last_n=None repeat_penalty=None temperature=None stop=None tfs_z=None top_k=None top_p=None model_kwargs=None Traceback (most recent call last): File "olama.py", line 23, in <module> vectorstore = Chroma.from_documents(documents=all_splits, embedding=oembed) File "./Env/chatbot/lib/python3.8/site-packages/langchain/vectorstores/chroma.py", line 771, in from_documents return cls.from_texts( File "./Env/chatbot/lib/python3.8/site-packages/langchain/vectorstores/chroma.py", line 729, in from_texts chroma_collection.add_texts( File "./Env/chatbot/lib/python3.8/site-packages/langchain/vectorstores/chroma.py", line 275, in add_texts embeddings = self._embedding_function.embed_documents(texts) File "./Env/chatbot/lib/python3.8/site-packages/langchain/embeddings/ollama.py", line 190, in embed_documents embeddings = self._embed(instruction_pairs) File "./Env/chatbot/lib/python3.8/site-packages/langchain/embeddings/ollama.py", line 175, in _embed embeddings = self._process_emb_response(prompt) File "./Env/chatbot/lib/python3.8/site-packages/langchain/embeddings/ollama.py", line 160, in _process_emb_response raise ValueError( ValueError: Error raised by inference API HTTP code: 500, {"error":"failed to generate embedding"} ``` And even i try to get embeddings via api ```terminal curl -X POST http://localhost:11434/api/embeddings -d '{ "model": "llama2", "prompt": "hello, how are you?" }' ``` My embeddings looks like this: ```terminal {"embedding":[0.713813066482544,-1.8217487335205078,0.48764699697494507,-0.8590573668479919,-0.7165888547897339,0.09285138547420502,-0.06305933743715286,0.8703135251998901,-0.7629101872444153,0.40360304713249207,1.6491974592208862,-0.7351164817810059,0.9032987356185913,1.0884538888931274,0.754738986492157,-0.0351458303630352,-0.2696535289287567,0.030562296509742737,-0.5470462441444397,-0.7511221170425415,1.6396052837371826,-2.254915714263916,-0.2600090503692627,-2.861804723739624,-0.31693896651268005,-1.0240172147750854,-0.7449401617050171,-2.3529539108276367,1.3769773244857788,-0.15259328484535217,1.564031958580017,0.33758652210235596,-1.2046291828155518,2.713618040084839,-0.22524534165859222,0.08619225025177002,0.5370852947235107,1.7855254411697388,-0.06518085300922394,-0.0628420040011406,0.5781055092811584,-4.234992504119873,-1.2907594442367554,-0.1936117261648178,1.3087000846862793,0.14808768033981323,-1.689073085784912,-3.103379011154175,0.5116130709648132,-1.674410104751587,0.026191502809524536,0.31160175800323486,1.843047022819519,-0.754763662815094,2.64321231842041,-0.9525578618049622,1.4135092496871948,1.014215350151062,1.701998233795166,0.35821205377578735,-4.310467720031738,1.3969742059707642,-0.3026293218135834,1.1710561513900757,-3.1511785984039307,-0.9500783681869507,0.25463706254959106,0.2536858320236206,0.2566526234149933,-0.8113981485366821,1.434630274772644,-0.41049930453300476,1.3408557176589966,0.3780902028083801,-2.971435546875,1.556931495666504,2.2950439453125,-2.0468714237213135,-1.6436930894851685,-0.6824514269828796,0.2970106601715088,0.48574963212013245,1.3311458826065063,-1.2823209762573242,-1.0872153043746948,1.0507322549819946,1.6134350299835205,0.44947174191474915,0.14205259084701538,-1.4551608562469482,-0.43960702419281006,-1.6097815036773682,-3.165051221847534,0.6144835352897644,1.2260065078735352,0.8544708490371704,-0.55448389
05334473,-1.207687258720398,0.3186914324760437,-0.9924833178520203,0.48585525155067444,-0.987743616104126,-1.0047131776809692... ```
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1577/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/122
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/122/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/122/comments
https://api.github.com/repos/ollama/ollama/issues/122/events
https://github.com/ollama/ollama/issues/122
1,811,445,670
I_kwDOJ0Z1Ps5r-Hem
122
show ollama version
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
6
2023-07-19T08:32:07
2023-08-22T16:51:13
2023-08-22T16:51:12
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/122/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/122/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2144
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2144/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2144/comments
https://api.github.com/repos/ollama/ollama/issues/2144/events
https://github.com/ollama/ollama/pull/2144
2,094,699,870
PR_kwDOJ0Z1Ps5kw3Ls
2,144
faq: update to use launchctl setenv
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-01-22T20:31:42
2024-01-22T21:46:58
2024-01-22T21:46:57
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2144", "html_url": "https://github.com/ollama/ollama/pull/2144", "diff_url": "https://github.com/ollama/ollama/pull/2144.diff", "patch_url": "https://github.com/ollama/ollama/pull/2144.patch", "merged_at": "2024-01-22T21:46:57" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2144/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2144/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5068
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5068/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5068/comments
https://api.github.com/repos/ollama/ollama/issues/5068/events
https://github.com/ollama/ollama/issues/5068
2,355,092,563
I_kwDOJ0Z1Ps6MX9xT
5,068
please add nvidia/Nemotron-4-340B-Instruct
{ "login": "gileneusz", "id": 34601970, "node_id": "MDQ6VXNlcjM0NjAxOTcw", "avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gileneusz", "html_url": "https://github.com/gileneusz", "followers_url": "https://api.github.com/users/gileneusz/followers", "following_url": "https://api.github.com/users/gileneusz/following{/other_user}", "gists_url": "https://api.github.com/users/gileneusz/gists{/gist_id}", "starred_url": "https://api.github.com/users/gileneusz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gileneusz/subscriptions", "organizations_url": "https://api.github.com/users/gileneusz/orgs", "repos_url": "https://api.github.com/users/gileneusz/repos", "events_url": "https://api.github.com/users/gileneusz/events{/privacy}", "received_events_url": "https://api.github.com/users/gileneusz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
9
2024-06-15T18:10:10
2024-10-17T07:06:57
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
my GPUs are not utilized fully, I need to spin my H200s!! just kidding, need quantized version of the model ;)
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5068/reactions", "total_count": 11, "+1": 8, "-1": 0, "laugh": 3, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5068/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8407
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8407/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8407/comments
https://api.github.com/repos/ollama/ollama/issues/8407/events
https://github.com/ollama/ollama/pull/8407
2,785,807,752
PR_kwDOJ0Z1Ps6Hogsg
8,407
convert/test: migrate conversion tests to work with refactor
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2025-01-14T00:29:20
2025-01-14T17:45:15
2025-01-14T17:45:15
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8407", "html_url": "https://github.com/ollama/ollama/pull/8407", "diff_url": "https://github.com/ollama/ollama/pull/8407.diff", "patch_url": "https://github.com/ollama/ollama/pull/8407.patch", "merged_at": null }
When conversion was refactored it broke all these tests, but they were being silently skipped because the wrong file names were checked for. This change refactors the valid model conversion test to highlight the important files and check for the important details. Draft to see if people are ok with this approach before adding the rest of the model conversion tests. Related to: #4756
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8407/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8407/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7746
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7746/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7746/comments
https://api.github.com/repos/ollama/ollama/issues/7746/events
https://github.com/ollama/ollama/pull/7746
2,673,516,676
PR_kwDOJ0Z1Ps6CcNli
7,746
Add Community Integration (Update README.md)
{ "login": "gkamer8", "id": 10733401, "node_id": "MDQ6VXNlcjEwNzMzNDAx", "avatar_url": "https://avatars.githubusercontent.com/u/10733401?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gkamer8", "html_url": "https://github.com/gkamer8", "followers_url": "https://api.github.com/users/gkamer8/followers", "following_url": "https://api.github.com/users/gkamer8/following{/other_user}", "gists_url": "https://api.github.com/users/gkamer8/gists{/gist_id}", "starred_url": "https://api.github.com/users/gkamer8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gkamer8/subscriptions", "organizations_url": "https://api.github.com/users/gkamer8/orgs", "repos_url": "https://api.github.com/users/gkamer8/repos", "events_url": "https://api.github.com/users/gkamer8/events{/privacy}", "received_events_url": "https://api.github.com/users/gkamer8/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-11-19T20:49:37
2024-11-20T05:37:15
2024-11-20T05:37:15
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7746", "html_url": "https://github.com/ollama/ollama/pull/7746", "diff_url": "https://github.com/ollama/ollama/pull/7746.diff", "patch_url": "https://github.com/ollama/ollama/pull/7746.patch", "merged_at": "2024-11-20T05:37:15" }
Added [Abbey](https://github.com/US-Artificial-Intelligence/abbey), an open-source AI interface server, to the community integrations
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7746/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7746/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2150
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2150/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2150/comments
https://api.github.com/repos/ollama/ollama/issues/2150/events
https://github.com/ollama/ollama/pull/2150
2,095,058,881
PR_kwDOJ0Z1Ps5kyGUL
2,150
Set a default version using git describe
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-01-23T01:12:49
2024-01-23T01:41:08
2024-01-23T01:38:27
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2150", "html_url": "https://github.com/ollama/ollama/pull/2150", "diff_url": "https://github.com/ollama/ollama/pull/2150.diff", "patch_url": "https://github.com/ollama/ollama/pull/2150.patch", "merged_at": "2024-01-23T01:38:27" }
If a VERSION is not specified, this will generate a version string that represents the state of the repo. For example, `0.1.21-12-gffaf52e-dirty` represents 12 commits past the 0.1.21 tag, on commit ffaf52e (the `g` prefix denotes git), with a dirty working tree.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2150/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/947
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/947/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/947/comments
https://api.github.com/repos/ollama/ollama/issues/947/events
https://github.com/ollama/ollama/issues/947
1,967,122,164
I_kwDOJ0Z1Ps51P-b0
947
ollama push username/UppercaseModelname fails with 401 error
{ "login": "easp", "id": 414705, "node_id": "MDQ6VXNlcjQxNDcwNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/414705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/easp", "html_url": "https://github.com/easp", "followers_url": "https://api.github.com/users/easp/followers", "following_url": "https://api.github.com/users/easp/following{/other_user}", "gists_url": "https://api.github.com/users/easp/gists{/gist_id}", "starred_url": "https://api.github.com/users/easp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/easp/subscriptions", "organizations_url": "https://api.github.com/users/easp/orgs", "repos_url": "https://api.github.com/users/easp/repos", "events_url": "https://api.github.com/users/easp/events{/privacy}", "received_events_url": "https://api.github.com/users/easp/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-10-29T19:21:02
2023-10-30T00:55:34
2023-10-29T19:35:47
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I tried pushing a model I'd created with capital letters in the name and repeatedly got a 401 error. It took me a while to figure out why. It seems like the error should be more descriptive and/or `ollama create` and `ollama cp` should enforce the lower-case only rule.
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.github.com/users/technovangelist/followers", "following_url": "https://api.github.com/users/technovangelist/following{/other_user}", "gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}", "starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions", "organizations_url": "https://api.github.com/users/technovangelist/orgs", "repos_url": "https://api.github.com/users/technovangelist/repos", "events_url": "https://api.github.com/users/technovangelist/events{/privacy}", "received_events_url": "https://api.github.com/users/technovangelist/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/947/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/947/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6930
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6930/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6930/comments
https://api.github.com/repos/ollama/ollama/issues/6930/events
https://github.com/ollama/ollama/issues/6930
2,545,018,031
I_kwDOJ0Z1Ps6XseSv
6,930
Tesla P40 24G and Quadro M6000 24G cannot work together
{ "login": "Blake110", "id": 98226493, "node_id": "U_kgDOBdrRPQ", "avatar_url": "https://avatars.githubusercontent.com/u/98226493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Blake110", "html_url": "https://github.com/Blake110", "followers_url": "https://api.github.com/users/Blake110/followers", "following_url": "https://api.github.com/users/Blake110/following{/other_user}", "gists_url": "https://api.github.com/users/Blake110/gists{/gist_id}", "starred_url": "https://api.github.com/users/Blake110/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Blake110/subscriptions", "organizations_url": "https://api.github.com/users/Blake110/orgs", "repos_url": "https://api.github.com/users/Blake110/repos", "events_url": "https://api.github.com/users/Blake110/events{/privacy}", "received_events_url": "https://api.github.com/users/Blake110/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" } ]
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
10
2024-09-24T10:35:14
2024-09-27T10:32:47
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? With a Tesla P40 and a Quadro M6000 installed together, only the P40 works and the M6000's memory is not used by ollama, even after modifying ollama.service for multiple GPUs. The P40 with a 1080 Ti works fine with the default ollama.service, and the P40 with an RTX 2060 also works fine with the default ollama.service. Can anyone tell me why, and is there a chance to make them work together? Thanks. ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.11
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6930/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6003
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6003/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6003/comments
https://api.github.com/repos/ollama/ollama/issues/6003/events
https://github.com/ollama/ollama/issues/6003
2,433,120,533
I_kwDOJ0Z1Ps6RBnkV
6,003
AMD Radeon RX 6750 XT Support
{ "login": "SmollClover", "id": 39840298, "node_id": "MDQ6VXNlcjM5ODQwMjk4", "avatar_url": "https://avatars.githubusercontent.com/u/39840298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SmollClover", "html_url": "https://github.com/SmollClover", "followers_url": "https://api.github.com/users/SmollClover/followers", "following_url": "https://api.github.com/users/SmollClover/following{/other_user}", "gists_url": "https://api.github.com/users/SmollClover/gists{/gist_id}", "starred_url": "https://api.github.com/users/SmollClover/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SmollClover/subscriptions", "organizations_url": "https://api.github.com/users/SmollClover/orgs", "repos_url": "https://api.github.com/users/SmollClover/repos", "events_url": "https://api.github.com/users/SmollClover/events{/privacy}", "received_events_url": "https://api.github.com/users/SmollClover/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
6
2024-07-27T00:16:23
2024-07-28T14:59:35
2024-07-28T14:59:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Currently, as it seems, the Radeon RX 6750 XT isn't supported by Ollama and trying to force it to use it using `env HSA_OVERRIDE_GFX_VERSION=gfx1031 ollama serve` results in it being unable to initialize the tensile host. Edit: Without the HSA_OVERRIDE_GFX_VERSION, it just states that it wasn't able to find a supported GPU. <details> <summary>Ollama Serve Logs</summary> ``` env HSA_OVERRIDE_GFX_VERSION=gfx1031 ollama serve 2024/07/27 02:13:11 routes.go:1099: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION:gfx1031 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/name/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]" time=2024-07-27T02:13:11.712+02:00 level=INFO source=images.go:784 msg="total blobs: 6" time=2024-07-27T02:13:11.712+02:00 level=INFO source=images.go:791 msg="total unused blobs removed: 0" time=2024-07-27T02:13:11.712+02:00 level=INFO source=routes.go:1146 msg="Listening on 127.0.0.1:11434 (version 0.3.0)" time=2024-07-27T02:13:11.712+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama924064373/runners time=2024-07-27T02:13:14.557+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11 rocm_v60102 cpu cpu_avx cpu_avx2]" time=2024-07-27T02:13:14.557+02:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs" time=2024-07-27T02:13:14.564+02:00 level=WARN source=amd_linux.go:58 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory" time=2024-07-27T02:13:14.567+02:00 level=INFO source=amd_linux.go:333 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=gfx1031 time=2024-07-27T02:13:14.567+02:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=rocm compute=gfx1031 driver=0.0 name=1002:73df total="12.0 GiB" available="10.6 GiB" [GIN] 2024/07/27 - 02:13:30 | 200 | 29.188µs | 127.0.0.1 | HEAD "/" [GIN] 2024/07/27 - 02:13:30 | 200 | 15.59507ms | 127.0.0.1 | POST "/api/show" time=2024-07-27T02:13:30.912+02:00 level=INFO source=memory.go:309 msg="offload to rocm" layers.requested=-1 layers.model=81 layers.offload=18 layers.split="" memory.available="[10.6 GiB]" memory.required.full="39.3 GiB" memory.required.partial="10.2 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[10.2 GiB]" memory.weights.total="36.5 GiB" memory.weights.repeating="35.7 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="324.0 MiB" memory.graph.partial="1.1 GiB" time=2024-07-27T02:13:30.913+02:00 level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama924064373/runners/rocm_v60102/ollama_llama_server --model /home/name/.ollama/models/blobs/sha256-9631a2551ac477921b64862e5cd4e13223183d7699ebf6bf6d14488edc8b4552 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 18 --parallel 1 
--port 46711" time=2024-07-27T02:13:30.913+02:00 level=INFO source=sched.go:437 msg="loaded runners" count=1 time=2024-07-27T02:13:30.913+02:00 level=INFO source=server.go:583 msg="waiting for llama runner to start responding" time=2024-07-27T02:13:30.918+02:00 level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server error" INFO [main] build info | build=1 commit="d94c6e0" tid="139702802625344" timestamp=1722039211 INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="139702802625344" timestamp=1722039211 total_threads=8 INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="46711" tid="139702802625344" timestamp=1722039211 llama_model_loader: loaded meta data with 22 key-value pairs and 723 tensors from /home/name/.ollama/models/blobs/sha256-9631a2551ac477921b64862e5cd4e13223183d7699ebf6bf6d14488edc8b4552 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = dolphin-2.9-llama3-70b llama_model_loader: - kv 2: llama.block_count u32 = 80 llama_model_loader: - kv 3: llama.context_length u32 = 8192 llama_model_loader: - kv 4: llama.embedding_length u32 = 8192 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 28672 llama_model_loader: - kv 6: llama.attention.head_count u32 = 64 llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 10: general.file_type u32 = 2 llama_model_loader: - kv 11: llama.vocab_size u32 = 128258 llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2 time=2024-07-27T02:13:31.221+02:00 level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server loading model" llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128258] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128258] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128256 llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 128001 llama_model_loader: - kv 20: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
llama_model_loader: - kv 21: general.quantization_version u32 = 2 llama_model_loader: - type f32: 161 tensors llama_model_loader: - type q4_0: 561 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default' llm_load_vocab: special tokens cache size = 258 llm_load_vocab: token to piece cache size = 0.8000 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 128258 llm_load_print_meta: n_merges = 280147 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 8192 llm_load_print_meta: n_embd = 8192 llm_load_print_meta: n_layer = 80 llm_load_print_meta: n_head = 64 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 8 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 28672 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 500000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 8192 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: model type = 70B llm_load_print_meta: model ftype = Q4_0 llm_load_print_meta: model params = 70.55 B llm_load_print_meta: model size = 37.22 GiB (4.53 BPW) llm_load_print_meta: general.name = dolphin-2.9-llama3-70b llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>' llm_load_print_meta: EOS token = 128256 '<|im_end|>' llm_load_print_meta: PAD token = 128001 '<|end_of_text|>' llm_load_print_meta: LF token = 128 'Ä' llm_load_print_meta: EOT token = 128256 '<|im_end|>' llm_load_print_meta: max token length = 256 /opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory rocBLAS error: Could not initialize Tensile host: No devices found time=2024-07-27T02:13:32.174+02:00 level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server not responding" time=2024-07-27T02:13:32.563+02:00 level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server error" time=2024-07-27T02:13:32.813+02:00 level=ERROR source=sched.go:443 msg="error loading llama server" error="llama runner process has terminated: error:Could not initialize Tensile host: No devices found" ``` </details> <details> <summary>ROCMInfo</summary> ``` ROCk module is loaded ===================== HSA System Attributes ===================== Runtime Version: 1.13 Runtime Ext Version: 1.4 System Timestamp Freq.: 1000.000000MHz Sig. 
Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count) Machine Model: LARGE System Endianness: LITTLE Mwaitx: DISABLED DMAbuf Support: YES ========== HSA Agents ========== ******* Agent 1 ******* Name: Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz Uuid: CPU-XX Marketing Name: Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz Vendor Name: CPU Feature: None specified Profile: FULL_PROFILE Float Round Mode: NEAR Max Queue Number: 0(0x0) Queue Min Size: 0(0x0) Queue Max Size: 0(0x0) Queue Type: MULTI Node: 0 Device Type: CPU Cache Info: L1: 32768(0x8000) KB Chip ID: 0(0x0) ASIC Revision: 0(0x0) Cacheline Size: 64(0x40) Max Clock Freq. (MHz): 4700 BDFID: 0 Internal Node ID: 0 Compute Unit: 8 SIMDs per CU: 0 Shader Engines: 0 Shader Arrs. per Eng.: 0 WatchPts on Addr. Ranges:1 Features: None Pool Info: Pool 1 Segment: GLOBAL; FLAGS: FINE GRAINED Size: 65746152(0x3eb34e8) KB Allocatable: TRUE Alloc Granule: 4KB Alloc Recommended Granule:4KB Alloc Alignment: 4KB Accessible by all: TRUE Pool 2 Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED Size: 65746152(0x3eb34e8) KB Allocatable: TRUE Alloc Granule: 4KB Alloc Recommended Granule:4KB Alloc Alignment: 4KB Accessible by all: TRUE Pool 3 Segment: GLOBAL; FLAGS: COARSE GRAINED Size: 65746152(0x3eb34e8) KB Allocatable: TRUE Alloc Granule: 4KB Alloc Recommended Granule:4KB Alloc Alignment: 4KB Accessible by all: TRUE ISA Info: ******* Agent 2 ******* Name: gfx1031 Uuid: GPU-XX Marketing Name: AMD Radeon RX 6750 XT Vendor Name: AMD Feature: KERNEL_DISPATCH Profile: BASE_PROFILE Float Round Mode: NEAR Max Queue Number: 128(0x80) Queue Min Size: 64(0x40) Queue Max Size: 131072(0x20000) Queue Type: MULTI Node: 1 Device Type: GPU Cache Info: L1: 16(0x10) KB L2: 3072(0xc00) KB L3: 98304(0x18000) KB Chip ID: 29663(0x73df) ASIC Revision: 0(0x0) Cacheline Size: 64(0x40) Max Clock Freq. (MHz): 2880 BDFID: 768 Internal Node ID: 1 Compute Unit: 40 SIMDs per CU: 2 Shader Engines: 2 Shader Arrs. per Eng.: 2 WatchPts on Addr. 
Ranges:4 Coherent Host Access: FALSE Features: KERNEL_DISPATCH Fast F16 Operation: TRUE Wavefront Size: 32(0x20) Workgroup Max Size: 1024(0x400) Workgroup Max Size per Dimension: x 1024(0x400) y 1024(0x400) z 1024(0x400) Max Waves Per CU: 32(0x20) Max Work-item Per CU: 1024(0x400) Grid Max Size: 4294967295(0xffffffff) Grid Max Size per Dimension: x 4294967295(0xffffffff) y 4294967295(0xffffffff) z 4294967295(0xffffffff) Max fbarriers/Workgrp: 32 Packet Processor uCode:: 118 SDMA engine uCode:: 80 IOMMU Support:: None Pool Info: Pool 1 Segment: GLOBAL; FLAGS: COARSE GRAINED Size: 12566528(0xbfc000) KB Allocatable: TRUE Alloc Granule: 4KB Alloc Recommended Granule:2048KB Alloc Alignment: 4KB Accessible by all: FALSE Pool 2 Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED Size: 12566528(0xbfc000) KB Allocatable: TRUE Alloc Granule: 4KB Alloc Recommended Granule:2048KB Alloc Alignment: 4KB Accessible by all: FALSE Pool 3 Segment: GROUP Size: 64(0x40) KB Allocatable: FALSE Alloc Granule: 0KB Alloc Recommended Granule:0KB Alloc Alignment: 0KB Accessible by all: FALSE ISA Info: ISA 1 Name: amdgcn-amd-amdhsa--gfx1031 Machine Models: HSA_MACHINE_MODEL_LARGE Profiles: HSA_PROFILE_BASE Default Rounding Mode: NEAR Default Rounding Mode: NEAR Fast f16: TRUE Workgroup Max Size: 1024(0x400) Workgroup Max Size per Dimension: x 1024(0x400) y 1024(0x400) z 1024(0x400) Grid Max Size: 4294967295(0xffffffff) Grid Max Size per Dimension: x 4294967295(0xffffffff) y 4294967295(0xffffffff) z 4294967295(0xffffffff) FBarrier Max Size: 32 *** Done *** ``` </details> ### OS Linux ### GPU AMD ### CPU Intel ### Ollama version 0.3.0
{ "login": "SmollClover", "id": 39840298, "node_id": "MDQ6VXNlcjM5ODQwMjk4", "avatar_url": "https://avatars.githubusercontent.com/u/39840298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SmollClover", "html_url": "https://github.com/SmollClover", "followers_url": "https://api.github.com/users/SmollClover/followers", "following_url": "https://api.github.com/users/SmollClover/following{/other_user}", "gists_url": "https://api.github.com/users/SmollClover/gists{/gist_id}", "starred_url": "https://api.github.com/users/SmollClover/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SmollClover/subscriptions", "organizations_url": "https://api.github.com/users/SmollClover/orgs", "repos_url": "https://api.github.com/users/SmollClover/repos", "events_url": "https://api.github.com/users/SmollClover/events{/privacy}", "received_events_url": "https://api.github.com/users/SmollClover/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6003/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4210
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4210/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4210/comments
https://api.github.com/repos/ollama/ollama/issues/4210/events
https://github.com/ollama/ollama/issues/4210
2,281,890,062
I_kwDOJ0Z1Ps6IAuEO
4,210
Is the template correct?
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/taozhiyuai/followers", "following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}", "gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}", "starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions", "organizations_url": "https://api.github.com/users/taozhiyuai/orgs", "repos_url": "https://api.github.com/users/taozhiyuai/repos", "events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}", "received_events_url": "https://api.github.com/users/taozhiyuai/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-05-06T22:24:17
2024-05-09T16:42:05
2024-05-07T16:46:09
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am trying to import https://hf-mirror.com/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF. The template from this HF webpage is ' <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ' The Modelfile I use is the following: ' FROM /Users/taozhiyu/Downloads/M-GGUF/Hermes-2-Pro-Llama-3-8B-GGUF/Q8/Hermes-2-Pro-Llama-3-8B-Q8_0.gguf TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|>""" PARAMETER num_keep 24 PARAMETER stop "<|start_header_id|>" PARAMETER stop "<|end_header_id|>" PARAMETER stop "<|eot_id|>"ol ' The inference output is not normal. Does anyone know how to modify the Modelfile? Thanks. ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.1.32
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4210/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4071
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4071/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4071/comments
https://api.github.com/repos/ollama/ollama/issues/4071/events
https://github.com/ollama/ollama/issues/4071
2,273,059,271
I_kwDOJ0Z1Ps6HfCHH
4,071
ollama pull llama3 error
{ "login": "wisepmlin", "id": 74945717, "node_id": "MDQ6VXNlcjc0OTQ1NzE3", "avatar_url": "https://avatars.githubusercontent.com/u/74945717?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wisepmlin", "html_url": "https://github.com/wisepmlin", "followers_url": "https://api.github.com/users/wisepmlin/followers", "following_url": "https://api.github.com/users/wisepmlin/following{/other_user}", "gists_url": "https://api.github.com/users/wisepmlin/gists{/gist_id}", "starred_url": "https://api.github.com/users/wisepmlin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wisepmlin/subscriptions", "organizations_url": "https://api.github.com/users/wisepmlin/orgs", "repos_url": "https://api.github.com/users/wisepmlin/repos", "events_url": "https://api.github.com/users/wisepmlin/events{/privacy}", "received_events_url": "https://api.github.com/users/wisepmlin/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-05-01T07:17:41
2024-05-01T20:38:39
2024-05-01T20:38:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ollama pull llama3 Error: pull model manifest: Get "https://ollama.com/token?nonce=EBGaz66AqKbJTscDMcl-ag&scope=repository%!A(MISSING)library%!F(MISSING)llama3%!A(MISSING)pull&service=ollama.com&ts=1714547177": read tcp 192.168.188.104:49346->34.120.132.20:443: read: connection reset by peer Can't I prioritize it? I can't use it locally ### OS macOS ### GPU Intel ### CPU Intel ### Ollama version llama2&llama3
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4071/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5947
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5947/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5947/comments
https://api.github.com/repos/ollama/ollama/issues/5947/events
https://github.com/ollama/ollama/issues/5947
2,429,706,051
I_kwDOJ0Z1Ps6Q0l9D
5,947
Would be cool to find somewhere how to upgrade ollama 0.2.5 to 0.2.8 on MacOS
{ "login": "deniercounter", "id": 24805904, "node_id": "MDQ6VXNlcjI0ODA1OTA0", "avatar_url": "https://avatars.githubusercontent.com/u/24805904?v=4", "gravatar_id": "", "url": "https://api.github.com/users/deniercounter", "html_url": "https://github.com/deniercounter", "followers_url": "https://api.github.com/users/deniercounter/followers", "following_url": "https://api.github.com/users/deniercounter/following{/other_user}", "gists_url": "https://api.github.com/users/deniercounter/gists{/gist_id}", "starred_url": "https://api.github.com/users/deniercounter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/deniercounter/subscriptions", "organizations_url": "https://api.github.com/users/deniercounter/orgs", "repos_url": "https://api.github.com/users/deniercounter/repos", "events_url": "https://api.github.com/users/deniercounter/events{/privacy}", "received_events_url": "https://api.github.com/users/deniercounter/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
4
2024-07-25T11:18:09
2024-07-25T12:26:33
2024-07-25T12:26:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
To find the command to update ollama on macOS, it seems one has to join a Discord server. Really?
{ "login": "deniercounter", "id": 24805904, "node_id": "MDQ6VXNlcjI0ODA1OTA0", "avatar_url": "https://avatars.githubusercontent.com/u/24805904?v=4", "gravatar_id": "", "url": "https://api.github.com/users/deniercounter", "html_url": "https://github.com/deniercounter", "followers_url": "https://api.github.com/users/deniercounter/followers", "following_url": "https://api.github.com/users/deniercounter/following{/other_user}", "gists_url": "https://api.github.com/users/deniercounter/gists{/gist_id}", "starred_url": "https://api.github.com/users/deniercounter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/deniercounter/subscriptions", "organizations_url": "https://api.github.com/users/deniercounter/orgs", "repos_url": "https://api.github.com/users/deniercounter/repos", "events_url": "https://api.github.com/users/deniercounter/events{/privacy}", "received_events_url": "https://api.github.com/users/deniercounter/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5947/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5947/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4441
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4441/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4441/comments
https://api.github.com/repos/ollama/ollama/issues/4441/events
https://github.com/ollama/ollama/pull/4441
2,296,601,945
PR_kwDOJ0Z1Ps5vdnRX
4,441
Use DRM driver for VRAM info for amd
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-05-14T23:58:44
2024-06-06T17:57:38
2024-06-06T17:57:34
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4441", "html_url": "https://github.com/ollama/ollama/pull/4441", "diff_url": "https://github.com/ollama/ollama/pull/4441.diff", "patch_url": "https://github.com/ollama/ollama/pull/4441.patch", "merged_at": null }
The amdgpu driver's free VRAM reporting omits some other apps, so leverage the upstream DRM driver, which keeps better tabs on things. Marking draft until I can do more testing... Fixes #3765
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4441/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4441/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/941
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/941/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/941/comments
https://api.github.com/repos/ollama/ollama/issues/941/events
https://github.com/ollama/ollama/issues/941
1,966,683,283
I_kwDOJ0Z1Ps51OTST
941
`digest mismatch` on download
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
116
2023-10-28T17:47:23
2025-01-30T02:21:24
null
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
While rare, `ollama pull` will sometimes result in a digest mismatch on download ``` % ollama run wizard-vicuna-uncensored:30b-q5_K_M pulling manifest pulling b1571c5cbd28... 100% |█████████████████████████████████████████████████████████████████████████████████████████████████████████| (45/45 B, 34 B/s) pulling d14264189a8a... 100% |█████████████████████████████████████████████████████████████████████████████████████████████████████████| (31/31 B, 17 B/s) pulling c4c2b65331ba... 100% |██████████████████████████████████████████████████████████████████████████████████████████████████████| (384/384 B, 238 B/s) verifying sha256 digest Error: digest mismatch, file must be downloaded again: want sha256:1a640cd4d69a5260bcc807a531f82ddb3890ebf49bc2a323e60a9290547135c1, got sha256:5eef5d8ec5ce977b74f91524c0002f9a7adeb61606cdbdad6460e25d58d0f454 ```
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/941/reactions", "total_count": 25, "+1": 25, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/941/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7265
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7265/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7265/comments
https://api.github.com/repos/ollama/ollama/issues/7265/events
https://github.com/ollama/ollama/pull/7265
2,598,655,929
PR_kwDOJ0Z1Ps5_KCM1
7,265
Migrate off centos 7 for intermediate build layers in container image builds
{ "login": "cazlo", "id": 3895350, "node_id": "MDQ6VXNlcjM4OTUzNTA=", "avatar_url": "https://avatars.githubusercontent.com/u/3895350?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cazlo", "html_url": "https://github.com/cazlo", "followers_url": "https://api.github.com/users/cazlo/followers", "following_url": "https://api.github.com/users/cazlo/following{/other_user}", "gists_url": "https://api.github.com/users/cazlo/gists{/gist_id}", "starred_url": "https://api.github.com/users/cazlo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cazlo/subscriptions", "organizations_url": "https://api.github.com/users/cazlo/orgs", "repos_url": "https://api.github.com/users/cazlo/repos", "events_url": "https://api.github.com/users/cazlo/events{/privacy}", "received_events_url": "https://api.github.com/users/cazlo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
1
2024-10-19T02:03:54
2024-12-05T22:25:43
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7265", "html_url": "https://github.com/ollama/ollama/pull/7265", "diff_url": "https://github.com/ollama/ollama/pull/7265.diff", "patch_url": "https://github.com/ollama/ollama/pull/7265.patch", "merged_at": null }
# What Migrate dependencies in the container image build to later supported versions: - centos 7 -> rockylinux 8 - gcc 10.2 -> gcc 11.2 - cuda 11.3.1 -> 11.7.1 # Why Closes #7260. Avoids a compile issue with gcc 10.3. Avoids the end of life of centos 7. More info on the justification is available at #7260 # Testing See also https://github.com/cazlo/ollama/pull/3. - No arm64 compile issues observed (when forcing CI builds of this arch) - No amd64 compile issues observed for rocm - No amd64 runtime issues observed for rocm when running with 7900xt hardware (compared performance of baseline prompts to those done on HEAD of main and saw no significant deviation; sample-set size was ~240 prompts across 4 human-language and 4 computer-language models. No crashes or other weirdness observed in the test runs) I don't have Nvidia or Apple hardware easily available to test those runtimes; I only tested on modern AMD GPU runtimes.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7265/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4684
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4684/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4684/comments
https://api.github.com/repos/ollama/ollama/issues/4684/events
https://github.com/ollama/ollama/issues/4684
2,321,811,440
I_kwDOJ0Z1Ps6KZAfw
4,684
Model download finally fails behind company firewall
{ "login": "berndgoetz", "id": 227312, "node_id": "MDQ6VXNlcjIyNzMxMg==", "avatar_url": "https://avatars.githubusercontent.com/u/227312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/berndgoetz", "html_url": "https://github.com/berndgoetz", "followers_url": "https://api.github.com/users/berndgoetz/followers", "following_url": "https://api.github.com/users/berndgoetz/following{/other_user}", "gists_url": "https://api.github.com/users/berndgoetz/gists{/gist_id}", "starred_url": "https://api.github.com/users/berndgoetz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/berndgoetz/subscriptions", "organizations_url": "https://api.github.com/users/berndgoetz/orgs", "repos_url": "https://api.github.com/users/berndgoetz/repos", "events_url": "https://api.github.com/users/berndgoetz/events{/privacy}", "received_events_url": "https://api.github.com/users/berndgoetz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677370291, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw", "url": "https://api.github.com/repos/ollama/ollama/labels/networking", "name": "networking", "color": "0B5368", "default": false, "description": "Issues relating to ollama pull and push" } ]
open
false
null
[]
null
2
2024-05-28T19:54:24
2024-10-23T16:07:25
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I want to make ollama available to our developer community in our company in order to learn to use the technology. We got ollama.com/* whitelisted through our company firewall, and it actually pretty much works quite well, but at the end of the model download, the process gets stuck: pulling 42ba7f8a01dd... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 557 B verifying sha256 digest Error: digest mismatch, file must be downloaded again: want sha256:8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246, got sha256:1dd340187fa2df434fb562efa41b824c8b1ef6f3f3f3bc401280f80d8adea61e After some digging, it seems that our malware scanner runs into a false positive with a file served from a cloudflare location ![image](https://github.com/ollama/ollama/assets/227312/c92406eb-f00a-41ee-a050-b26791df932c) The malware software reports this: "Trickling"... ![image](https://github.com/ollama/ollama/assets/227312/a34df958-d6ba-47a0-b76d-91e4f0fee223) Any chance to get this changed/fixed? ### OS Windows ### GPU AMD ### CPU AMD ### Ollama version 0.1.30
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4684/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4684/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6411
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6411/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6411/comments
https://api.github.com/repos/ollama/ollama/issues/6411/events
https://github.com/ollama/ollama/pull/6411
2,472,453,285
PR_kwDOJ0Z1Ps54r_mE
6,411
server: limit upload parts to 16
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-08-19T04:52:54
2024-08-19T16:20:54
2024-08-19T16:20:52
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6411", "html_url": "https://github.com/ollama/ollama/pull/6411", "diff_url": "https://github.com/ollama/ollama/pull/6411.diff", "patch_url": "https://github.com/ollama/ollama/pull/6411.patch", "merged_at": "2024-08-19T16:20:52" }
In a similar vein to https://github.com/ollama/ollama/pull/6347, limit the number of upload connections to 16.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6411/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6411/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3857
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3857/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3857/comments
https://api.github.com/repos/ollama/ollama/issues/3857/events
https://github.com/ollama/ollama/pull/3857
2,259,978,702
PR_kwDOJ0Z1Ps5tiMlC
3,857
Add back memory escape valve
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-04-24T00:09:27
2024-04-24T00:32:27
2024-04-24T00:32:24
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3857", "html_url": "https://github.com/ollama/ollama/pull/3857", "diff_url": "https://github.com/ollama/ollama/pull/3857.diff", "patch_url": "https://github.com/ollama/ollama/pull/3857.patch", "merged_at": "2024-04-24T00:32:24" }
If we get our predictions wrong, this can be used to set a lower memory limit as a workaround. Recent multi-gpu refactoring accidentally removed it, so this adds it back.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3857/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/957
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/957/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/957/comments
https://api.github.com/repos/ollama/ollama/issues/957/events
https://github.com/ollama/ollama/issues/957
1,971,350,354
I_kwDOJ0Z1Ps51gGtS
957
How do I create a Docker image containing a model?
{ "login": "flemzord", "id": 1952914, "node_id": "MDQ6VXNlcjE5NTI5MTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1952914?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flemzord", "html_url": "https://github.com/flemzord", "followers_url": "https://api.github.com/users/flemzord/followers", "following_url": "https://api.github.com/users/flemzord/following{/other_user}", "gists_url": "https://api.github.com/users/flemzord/gists{/gist_id}", "starred_url": "https://api.github.com/users/flemzord/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flemzord/subscriptions", "organizations_url": "https://api.github.com/users/flemzord/orgs", "repos_url": "https://api.github.com/users/flemzord/repos", "events_url": "https://api.github.com/users/flemzord/events{/privacy}", "received_events_url": "https://api.github.com/users/flemzord/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
7
2023-10-31T21:49:04
2024-10-15T13:37:41
2024-03-11T19:05:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, I use a Modelfile locally and would like to deploy it to production on a Kubernetes cluster, but I don't know how to proceed. How can I create a Docker image containing Ollama and the model created from the Modelfile?
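One possible approach (not an official recipe) is to build a derived image that bakes the model in at image-build time. The sketch below is illustrative only: the model name `mymodel`, the `/models` path, and the fixed 5-second wait for the server are assumptions, and any weights referenced by the Modelfile need to be copied in alongside it.

```dockerfile
# Illustrative sketch only: names, paths, and timings are assumptions
FROM ollama/ollama:latest

# Store model blobs outside /root/.ollama so they end up in an image layer
ENV OLLAMA_MODELS=/models

# Copy the Modelfile (and any local weights it references) into the build
COPY Modelfile /tmp/Modelfile

# Start a temporary server, create the model, then let the build step exit;
# the created blobs stay in /models inside the image
RUN ollama serve & \
    sleep 5 && \
    ollama create mymodel -f /tmp/Modelfile
```

At runtime the container can then be started as usual and the baked-in model served with `ollama run mymodel`, without pulling anything at pod start.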
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/957/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/957/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7940
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7940/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7940/comments
https://api.github.com/repos/ollama/ollama/issues/7940/events
https://github.com/ollama/ollama/issues/7940
2,719,108,129
I_kwDOJ0Z1Ps6iEkwh
7,940
Mini-CPM-V-2.6-q8_0 produces incoherent responses after applying KV Cache q4_0 or q8_0.
{ "login": "SingularityMan", "id": 91804288, "node_id": "U_kgDOBXjSgA", "avatar_url": "https://avatars.githubusercontent.com/u/91804288?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SingularityMan", "html_url": "https://github.com/SingularityMan", "followers_url": "https://api.github.com/users/SingularityMan/followers", "following_url": "https://api.github.com/users/SingularityMan/following{/other_user}", "gists_url": "https://api.github.com/users/SingularityMan/gists{/gist_id}", "starred_url": "https://api.github.com/users/SingularityMan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SingularityMan/subscriptions", "organizations_url": "https://api.github.com/users/SingularityMan/orgs", "repos_url": "https://api.github.com/users/SingularityMan/repos", "events_url": "https://api.github.com/users/SingularityMan/events{/privacy}", "received_events_url": "https://api.github.com/users/SingularityMan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-12-05T01:28:31
2024-12-05T01:28:31
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? This happens when calling ollama `/generate` via the Python API. The output looks like the model is having a seizure: it seems able to see the images, but its output is so random and erratic that I can't make out anything from the text. I didn't change any other parameter of the model. ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.4.8 RC
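For reference, a minimal reproduction sketch of the failing setup (the model tag `minicpm-v`, the image path, and the prompt are assumptions; KV cache quantization is a server-side setting configured through the `OLLAMA_KV_CACHE_TYPE` environment variable, which also requires flash attention to be enabled):

```python
# Repro sketch, assuming the server was started with:
#   OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
import ollama

response = ollama.generate(
    model="minicpm-v",                       # assumed tag for Mini-CPM-V 2.6 q8_0
    prompt="Describe what you see in this image.",
    images=["./example.jpg"],                # path to any local test image
)

# With the quantized KV cache active, this text comes back incoherent
print(response["response"])
```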
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7940/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2438
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2438/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2438/comments
https://api.github.com/repos/ollama/ollama/issues/2438/events
https://github.com/ollama/ollama/issues/2438
2,128,024,512
I_kwDOJ0Z1Ps5-1xPA
2,438
Issue with system messages being discarded after updating to v0.1.23
{ "login": "gaodeng", "id": 1118249, "node_id": "MDQ6VXNlcjExMTgyNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1118249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gaodeng", "html_url": "https://github.com/gaodeng", "followers_url": "https://api.github.com/users/gaodeng/followers", "following_url": "https://api.github.com/users/gaodeng/following{/other_user}", "gists_url": "https://api.github.com/users/gaodeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/gaodeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gaodeng/subscriptions", "organizations_url": "https://api.github.com/users/gaodeng/orgs", "repos_url": "https://api.github.com/users/gaodeng/repos", "events_url": "https://api.github.com/users/gaodeng/events{/privacy}", "received_events_url": "https://api.github.com/users/gaodeng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
0
2024-02-10T01:04:19
2024-02-12T23:06:58
2024-02-12T23:06:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Since the update to version v0.1.23, I have noticed that multiple system messages are being discarded when I send the following messages: ``` [ { "role": "system", "content": "You are an AI assistant called ‘BotGem’ that is based on the language model llama2. You are helpful, creative, clever, friendly, and honest.\n Current date: 2024-02-10" }, { "role": "system", "content": "You are a helpful assistant." }, { "role": "user", "content": "hello" } ] ``` Upon opening the ollama debug mode, I observed the following output: ``` level=DEBUG source=routes.go:1165 msg="chat handler" prompt="[INST] <<SYS>>You are a helpful assistant.<</SYS>>\n\nhello [/INST]\n" ``` As you can see, the first system message is being discarded.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2438/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2583
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2583/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2583/comments
https://api.github.com/repos/ollama/ollama/issues/2583/events
https://github.com/ollama/ollama/issues/2583
2,141,229,415
I_kwDOJ0Z1Ps5_oJFn
2,583
How to make a PR to fix a modelfile?
{ "login": "WolframRavenwolf", "id": 52386626, "node_id": "MDQ6VXNlcjUyMzg2NjI2", "avatar_url": "https://avatars.githubusercontent.com/u/52386626?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WolframRavenwolf", "html_url": "https://github.com/WolframRavenwolf", "followers_url": "https://api.github.com/users/WolframRavenwolf/followers", "following_url": "https://api.github.com/users/WolframRavenwolf/following{/other_user}", "gists_url": "https://api.github.com/users/WolframRavenwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/WolframRavenwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WolframRavenwolf/subscriptions", "organizations_url": "https://api.github.com/users/WolframRavenwolf/orgs", "repos_url": "https://api.github.com/users/WolframRavenwolf/repos", "events_url": "https://api.github.com/users/WolframRavenwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/WolframRavenwolf/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-02-18T23:05:32
2024-05-16T22:58:08
2024-05-16T22:56:47
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I couldn't find the modelfiles in this repo, but I'd like to fix the Mixtral modelfile and submit a PR. Its prompt format is wrong and I've fixed it locally, but how do I contribute that back to the project?
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2583/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2583/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6635
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6635/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6635/comments
https://api.github.com/repos/ollama/ollama/issues/6635/events
https://github.com/ollama/ollama/issues/6635
2,505,819,425
I_kwDOJ0Z1Ps6VW8Uh
6,635
Moondream2 needs an update
{ "login": "ddpasa", "id": 112642920, "node_id": "U_kgDOBrbLaA", "avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ddpasa", "html_url": "https://github.com/ddpasa", "followers_url": "https://api.github.com/users/ddpasa/followers", "following_url": "https://api.github.com/users/ddpasa/following{/other_user}", "gists_url": "https://api.github.com/users/ddpasa/gists{/gist_id}", "starred_url": "https://api.github.com/users/ddpasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ddpasa/subscriptions", "organizations_url": "https://api.github.com/users/ddpasa/orgs", "repos_url": "https://api.github.com/users/ddpasa/repos", "events_url": "https://api.github.com/users/ddpasa/events{/privacy}", "received_events_url": "https://api.github.com/users/ddpasa/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
1
2024-09-04T16:26:51
2024-11-19T23:24:41
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
moondream2 is an amazing tiny VLM. The owner (https://github.com/vikhyat) releases updates quite frequently. I'm not sure which version ollama currently has, but there was a new release last week (2024-08-26) that is not yet in ollama. https://huggingface.co/vikhyatk/moondream2
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6635/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6635/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7888
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7888/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7888/comments
https://api.github.com/repos/ollama/ollama/issues/7888/events
https://github.com/ollama/ollama/pull/7888
2,706,590,591
PR_kwDOJ0Z1Ps6DnXB7
7,888
Enable index tracking for tools - openai api support
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/users/ParthSareen/followers", "following_url": "https://api.github.com/users/ParthSareen/following{/other_user}", "gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions", "organizations_url": "https://api.github.com/users/ParthSareen/orgs", "repos_url": "https://api.github.com/users/ParthSareen/repos", "events_url": "https://api.github.com/users/ParthSareen/events{/privacy}", "received_events_url": "https://api.github.com/users/ParthSareen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-11-30T03:42:10
2024-11-30T04:00:11
2024-11-30T04:00:09
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7888", "html_url": "https://github.com/ollama/ollama/pull/7888", "diff_url": "https://github.com/ollama/ollama/pull/7888.diff", "patch_url": "https://github.com/ollama/ollama/pull/7888.patch", "merged_at": "2024-11-30T04:00:09" }
Closes https://github.com/ollama/ollama/issues/7881. With index tracking on streamed tool calls, `client.beta.chat.completions.stream` can now be used. <img width="570" alt="image" src="https://github.com/user-attachments/assets/32c48aca-b23a-40b8-b32d-2fcb667d2d81">
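For illustration, a usage sketch against Ollama's OpenAI-compatible endpoint (the base URL, the model tag `llama3.1`, and the `get_weather` tool schema are assumptions for the example, not part of this change):

```python
# Usage sketch: streams content deltas and reads accumulated tool calls
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",               # hypothetical tool for the example
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

with client.beta.chat.completions.stream(
    model="llama3.1",
    messages=[{"role": "user", "content": "What is the weather in Toronto?"}],
    tools=tools,
) as stream:
    for event in stream:
        if event.type == "content.delta":
            print(event.delta, end="", flush=True)
    final = stream.get_final_completion()

# With index tracking on streamed tool-call chunks, the helper can
# accumulate the deltas into complete tool calls
print(final.choices[0].message.tool_calls)
```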
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/users/ParthSareen/followers", "following_url": "https://api.github.com/users/ParthSareen/following{/other_user}", "gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions", "organizations_url": "https://api.github.com/users/ParthSareen/orgs", "repos_url": "https://api.github.com/users/ParthSareen/repos", "events_url": "https://api.github.com/users/ParthSareen/events{/privacy}", "received_events_url": "https://api.github.com/users/ParthSareen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7888/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7888/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2152
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2152/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2152/comments
https://api.github.com/repos/ollama/ollama/issues/2152/events
https://github.com/ollama/ollama/issues/2152
2,095,148,484
I_kwDOJ0Z1Ps584W3E
2,152
True SVG of Ollama logo?
{ "login": "sqs", "id": 1976, "node_id": "MDQ6VXNlcjE5NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/1976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sqs", "html_url": "https://github.com/sqs", "followers_url": "https://api.github.com/users/sqs/followers", "following_url": "https://api.github.com/users/sqs/following{/other_user}", "gists_url": "https://api.github.com/users/sqs/gists{/gist_id}", "starred_url": "https://api.github.com/users/sqs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sqs/subscriptions", "organizations_url": "https://api.github.com/users/sqs/orgs", "repos_url": "https://api.github.com/users/sqs/repos", "events_url": "https://api.github.com/users/sqs/events{/privacy}", "received_events_url": "https://api.github.com/users/sqs/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
4
2024-01-23T03:01:39
2024-07-14T20:49:16
2024-01-23T04:49:21
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I see https://github.com/jmorganca/ollama/blob/a0a829bf7a29b532f4bebe00e7cb1304ff9f0190/app/src/ollama.svg, but it's an SVG that embeds PNG data. Is there a true SVG of the Ollama logo? I would like to use it in the model selection dropdown in Cody: ![image](https://github.com/jmorganca/ollama/assets/1976/8d2a173a-8e54-4cb8-9e30-bc26186a2a11) (Not urgent!)
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2152/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2152/timeline
null
completed
false