| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/7049
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7049/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7049/comments
|
https://api.github.com/repos/ollama/ollama/issues/7049/events
|
https://github.com/ollama/ollama/issues/7049
| 2,557,134,462
|
I_kwDOJ0Z1Ps6YasZ-
| 7,049
|
ollama does not detect Quadro RTX 4000 - cuda driver library failed to get device context 801
|
{
"login": "mfzhsn",
"id": 5251972,
"node_id": "MDQ6VXNlcjUyNTE5NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5251972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfzhsn",
"html_url": "https://github.com/mfzhsn",
"followers_url": "https://api.github.com/users/mfzhsn/followers",
"following_url": "https://api.github.com/users/mfzhsn/following{/other_user}",
"gists_url": "https://api.github.com/users/mfzhsn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfzhsn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfzhsn/subscriptions",
"organizations_url": "https://api.github.com/users/mfzhsn/orgs",
"repos_url": "https://api.github.com/users/mfzhsn/repos",
"events_url": "https://api.github.com/users/mfzhsn/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfzhsn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2024-09-30T16:24:28
| 2024-11-25T19:04:01
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi All,
I installed Ollama both directly on the machine and in Docker, with the same behaviour in both cases: the GPU is not detected. LM Studio on the same machine picks up the GPU without any issues.
```
root@d50a3f8d8474:/# ollama run phi3.5:3.8b-mini-instruct-q2_K ""
root@d50a3f8d8474:/# ollama ps
NAME ID SIZE PROCESSOR UNTIL
phi3.5:3.8b-mini-instruct-q2_K 45b8dc82a846 5.3 GB 100% CPU 4 minutes from now
```
**Installation**
```
[root@ai ~]# curl -fsSL https://ollama.com/install.sh | sh
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> NVIDIA GPU installed.
```
*Logs from package installation*
```
[root@ai ~]# OLLAMA_DEBUG=1 ollama serve
Error: listen tcp 127.0.0.1:11434: bind: address already in use
[root@ai ~]# systemctl stop ollama
[root@ai ~]# OLLAMA_DEBUG=1 ollama serve
2024/09/29 03:47:20 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-09-29T03:47:20.643-05:00 level=INFO source=images.go:753 msg="total blobs: 10"
time=2024-09-29T03:47:20.672-05:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-29T03:47:20.672-05:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11434 (version 0.3.12)"
time=2024-09-29T03:47:20.673-05:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama3184037398/runners
time=2024-09-29T03:47:20.673-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/libggml.so.gz
time=2024-09-29T03:47:20.673-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/libllama.so.gz
time=2024-09-29T03:47:20.673-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/ollama_llama_server.gz
time=2024-09-29T03:47:20.673-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/libggml.so.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/libllama.so.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/ollama_llama_server.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/libggml.so.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/libllama.so.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/ollama_llama_server.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/libggml.so.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/libllama.so.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/ollama_llama_server.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/libggml.so.gz
time=2024-09-29T03:47:20.675-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/libllama.so.gz
time=2024-09-29T03:47:20.675-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/ollama_llama_server.gz
time=2024-09-29T03:47:20.675-05:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/libggml.so.gz
time=2024-09-29T03:47:20.675-05:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/libllama.so.gz
time=2024-09-29T03:47:20.676-05:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/ollama_llama_server.gz
time=2024-09-29T03:47:32.712-05:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/tmp/ollama3184037398/runners/cpu/ollama_llama_server
time=2024-09-29T03:47:32.712-05:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/tmp/ollama3184037398/runners/cpu_avx/ollama_llama_server
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/tmp/ollama3184037398/runners/cpu_avx2/ollama_llama_server
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/tmp/ollama3184037398/runners/cuda_v11/ollama_llama_server
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/tmp/ollama3184037398/runners/cuda_v12/ollama_llama_server
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/tmp/ollama3184037398/runners/rocm_v60102/ollama_llama_server
time=2024-09-29T03:47:32.713-05:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[rocm_v60102 cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-09-29T03:47:32.713-05:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=gpu.go:86 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcuda.so*
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/local/lib/ollama/libcuda.so* /usr/local/cuda/lib64/libcuda.so* /root/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-09-29T03:47:32.715-05:00 level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths=[/usr/lib64/libcuda.so.560.35.03]
CUDA driver version: 12.6
time=2024-09-29T03:47:32.878-05:00 level=DEBUG source=gpu.go:118 msg="detected GPUs" count=1 library=/usr/lib64/libcuda.so.560.35.03
time=2024-09-29T03:47:32.907-05:00 level=INFO source=gpu.go:252 msg="error looking up nvidia GPU memory" error="cuda driver library failed to get device context 801"
time=2024-09-29T03:47:32.907-05:00 level=DEBUG source=amd_linux.go:376 msg="amdgpu driver not detected /sys/module/amdgpu"
time=2024-09-29T03:47:32.907-05:00 level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
releasing cuda driver library
time=2024-09-29T03:47:32.907-05:00 level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="251.1 GiB" available="240.5 GiB"
```
**Logs from Docker installation**
```
[root@ai ~]# docker logs -f ollama
2024/09/30 15:58:28 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-09-30T15:58:28.508Z level=INFO source=images.go:753 msg="total blobs: 6"
time=2024-09-30T15:58:28.509Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-30T15:58:28.509Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.12)"
time=2024-09-30T15:58:28.510Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11 cuda_v12 cpu]"
time=2024-09-30T15:58:28.510Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-09-30T15:58:28.670Z level=INFO source=gpu.go:252 msg="error looking up nvidia GPU memory" error="cuda driver library failed to get device context 801"
time=2024-09-30T15:58:28.670Z level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
time=2024-09-30T15:58:28.670Z level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="251.1 GiB" available="240.4 GiB"
[GIN] 2024/09/30 - 15:59:07 | 200 | 94.19µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/09/30 - 15:59:07 | 200 | 10.954794ms | 127.0.0.1 | POST "/api/show"
time=2024-09-30T15:59:07.334Z level=INFO source=server.go:103 msg="system memory" total="251.1 GiB" free="240.5 GiB" free_swap="4.0 GiB"
time=2024-09-30T15:59:07.334Z level=INFO source=memory.go:326 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[240.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.3 GiB" memory.required.partial="0 B" memory.required.kv="4.0 GiB" memory.required.allocations="[8.3 GiB]" memory.weights.total="7.4 GiB" memory.weights.repeating="7.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="681.0 MiB"
time=2024-09-30T15:59:07.338Z level=INFO source=server.go:388 msg="starting llama server" cmd="/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --numa distribute --parallel 4 --port 39753"
time=2024-09-30T15:59:07.339Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-09-30T15:59:07.339Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
time=2024-09-30T15:59:07.339Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
WARNING: /proc/sys/kernel/numa_balancing is enabled, this has been observed to impair performance
INFO [main] build info | build=10 commit="070c75f" tid="140389372093376" timestamp=1727711947
INFO [main] system info | n_threads=20 n_threads_batch=20 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140389372093376" timestamp=1727711947 total_threads=40
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="39" port="39753" tid="140389372093376" timestamp=1727711947
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
```
The NVIDIA tools themselves work and produce the expected output:
```
[root@ai ~]# nvidia-smi
Sat Sep 28 01:07:12 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03 Driver Version: 560.35.03 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 Quadro RTX 4000 Off | 00000000:37:00.0 Off | N/A |
| 30% 34C P8 9W / 125W | 1MiB / 8192MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
```
```
[root@ai ~]# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Wed_Aug_14_10:10:22_PDT_2024
Cuda compilation tools, release 12.6, V12.6.68
Build cuda_12.6.r12.6/compiler.34714021_0
```
**OS**
Linux Rocky 9.4
```
[root@ai ~]# uname -r
5.14.0-427.37.1.el9_4.x86_64
```
**logs**
```
[root@ai ~]# sudo dmesg | grep -i nvidia
[ 1.704573] Loaded X.509 cert 'Rocky Enterprise Software Foundation: Nvidia GPU OOT Signing 101: 816ba9c770e6960cefe378020865d4ebbc352a7d'
[ 6.270595] input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:36/0000:36:00.0/0000:37:00.1/sound/card0/input6
[ 6.270694] input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:36/0000:36:00.0/0000:37:00.1/sound/card0/input7
[ 6.270796] input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:36/0000:36:00.0/0000:37:00.1/sound/card0/input8
[ 6.270843] input: HDA NVidia HDMI/DP,pcm=9 as /devices/pci0000:36/0000:36:00.0/0000:37:00.1/sound/card0/input9
[ 7.812685] nvidia: loading out-of-tree module taints kernel.
[ 7.812696] nvidia: module license 'NVIDIA' taints kernel.
[ 7.836076] nvidia: module verification failed: signature and/or required key missing - tainting kernel
[ 7.950180] nvidia-nvlink: Nvlink Core is being initialized, major device number 510
[ 7.951760] nvidia 0000:37:00.0: enabling device (0140 -> 0143)
[ 7.951842] nvidia 0000:37:00.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=none:owns=none
[ 8.001592] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 560.35.03 Fri Aug 16 21:39:15 UTC 2024
[ 8.115320] nvidia_uvm: module uses symbols from proprietary module nvidia, inheriting taint.
[ 8.252789] nvidia-uvm: Loaded the UVM driver, major device number 508.
[ 8.307886] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms 560.35.03 Fri Aug 16 21:21:48 UTC 2024
[ 8.323807] [drm] [nvidia-drm] [GPU ID 0x00003700] Loading driver
[ 9.814993] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:37:00.0 on minor 1
[ 9.815921] nvidia 0000:37:00.0: [drm] Cannot find any crtc or sizes
```
**additional logs**
```
[root@ai ~]# sudo dmesg | grep -i nvrm
[ 8.001592] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 560.35.03 Fri Aug 16 21:39:15 UTC 2024
```
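In the CUDA driver API, status code 801 is `CUDA_ERROR_NOT_SUPPORTED`, raised here while creating a device context even though `nvidia-smi` works. A common first check in that situation (a hedged diagnostic sketch, not something from the report above) is whether the `/dev/nvidia*` device nodes, in particular `/dev/nvidia-uvm`, exist, since compute contexts go through the UVM node rather than the display path that `nvidia-smi` exercises:

```shell
# Diagnostic sketch: verify the NVIDIA device nodes a CUDA context needs.
ls -l /dev/nvidia* 2>/dev/null || echo "no NVIDIA device nodes found"

# If /dev/nvidia-uvm is missing, nvidia-modprobe (shipped with the driver)
# can load the nvidia-uvm module and create its device node.
if [ ! -e /dev/nvidia-uvm ] && command -v nvidia-modprobe >/dev/null; then
  sudo nvidia-modprobe -u
fi
```

For the Docker case, the same nodes must be visible inside the container (e.g. via `--gpus=all` with the NVIDIA Container Toolkit), so checking `ls /dev/nvidia*` inside the container is worthwhile too.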
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.12
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7049/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7049/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2914
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2914/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2914/comments
|
https://api.github.com/repos/ollama/ollama/issues/2914/events
|
https://github.com/ollama/ollama/issues/2914
| 2,167,049,192
|
I_kwDOJ0Z1Ps6BKovo
| 2,914
|
ollama run starcoder2:15b
|
{
"login": "limaolin2017",
"id": 28923721,
"node_id": "MDQ6VXNlcjI4OTIzNzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/28923721?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/limaolin2017",
"html_url": "https://github.com/limaolin2017",
"followers_url": "https://api.github.com/users/limaolin2017/followers",
"following_url": "https://api.github.com/users/limaolin2017/following{/other_user}",
"gists_url": "https://api.github.com/users/limaolin2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/limaolin2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/limaolin2017/subscriptions",
"organizations_url": "https://api.github.com/users/limaolin2017/orgs",
"repos_url": "https://api.github.com/users/limaolin2017/repos",
"events_url": "https://api.github.com/users/limaolin2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/limaolin2017/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-04T14:57:44
| 2024-03-04T15:18:58
| 2024-03-04T15:12:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have encountered an error on an Apple Silicon M1 Pro:
```
ollama run starcoder2:15b
Error: Post "http://127.0.0.1:11434/api/chat": EOF
```
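An EOF on `/api/chat` means the server closed the connection mid-request, typically because the model runner crashed (a 15B model is a common out-of-memory candidate). A hedged diagnostic sketch, not from the report itself: confirm the server is still responding, then read its log for the crash reason (`~/.ollama/logs/server.log` is where the macOS app writes it):

```shell
# Is the server still up? An unreachable server suggests it crashed outright.
curl -s --max-time 2 http://127.0.0.1:11434/api/version || echo "server not responding"

# The macOS app logs here; the last lines usually show why the runner exited.
tail -n 50 ~/.ollama/logs/server.log 2>/dev/null || true
```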
|
{
"login": "limaolin2017",
"id": 28923721,
"node_id": "MDQ6VXNlcjI4OTIzNzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/28923721?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/limaolin2017",
"html_url": "https://github.com/limaolin2017",
"followers_url": "https://api.github.com/users/limaolin2017/followers",
"following_url": "https://api.github.com/users/limaolin2017/following{/other_user}",
"gists_url": "https://api.github.com/users/limaolin2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/limaolin2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/limaolin2017/subscriptions",
"organizations_url": "https://api.github.com/users/limaolin2017/orgs",
"repos_url": "https://api.github.com/users/limaolin2017/repos",
"events_url": "https://api.github.com/users/limaolin2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/limaolin2017/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2914/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5251
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5251/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5251/comments
|
https://api.github.com/repos/ollama/ollama/issues/5251/events
|
https://github.com/ollama/ollama/issues/5251
| 2,369,624,310
|
I_kwDOJ0Z1Ps6NPZj2
| 5,251
|
how to install this in my steam deck?
|
{
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/followers",
"following_url": "https://api.github.com/users/olumolu/following{/other_user}",
"gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olumolu/subscriptions",
"organizations_url": "https://api.github.com/users/olumolu/orgs",
"repos_url": "https://api.github.com/users/olumolu/repos",
"events_url": "https://api.github.com/users/olumolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/olumolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-06-24T08:46:38
| 2024-06-25T16:19:14
| 2024-06-25T16:18:58
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I can't install this on SteamOS 3.
I think the same issue affects other immutable, Fedora Silverblue-like OSes such as openSUSE Aeon.
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5251/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7377
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7377/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7377/comments
|
https://api.github.com/repos/ollama/ollama/issues/7377/events
|
https://github.com/ollama/ollama/pull/7377
| 2,616,184,675
|
PR_kwDOJ0Z1Ps5__7QO
| 7,377
|
readme: add TextCraft to community integrations
|
{
"login": "suncloudsmoon",
"id": 34616349,
"node_id": "MDQ6VXNlcjM0NjE2MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/34616349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suncloudsmoon",
"html_url": "https://github.com/suncloudsmoon",
"followers_url": "https://api.github.com/users/suncloudsmoon/followers",
"following_url": "https://api.github.com/users/suncloudsmoon/following{/other_user}",
"gists_url": "https://api.github.com/users/suncloudsmoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suncloudsmoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suncloudsmoon/subscriptions",
"organizations_url": "https://api.github.com/users/suncloudsmoon/orgs",
"repos_url": "https://api.github.com/users/suncloudsmoon/repos",
"events_url": "https://api.github.com/users/suncloudsmoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/suncloudsmoon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-26T22:34:06
| 2024-11-04T00:53:51
| 2024-11-04T00:53:51
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7377",
"html_url": "https://github.com/ollama/ollama/pull/7377",
"diff_url": "https://github.com/ollama/ollama/pull/7377.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7377.patch",
"merged_at": "2024-11-04T00:53:51"
}
|
Hey everyone! I've recently been working on an extension for Word that aims to be a local, privacy-friendly alternative to Microsoft 365 Copilot by using Ollama as the backend. I would like to introduce TextCraft, an add-in for Word that seamlessly integrates essential AI tools, including text generation, proofreading, and more, directly into the user interface. I think it would be great if more people were aware of this local alternative to Copilot. Thank you.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7377/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6713
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6713/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6713/comments
|
https://api.github.com/repos/ollama/ollama/issues/6713/events
|
https://github.com/ollama/ollama/issues/6713
| 2,514,609,406
|
I_kwDOJ0Z1Ps6V4eT-
| 6,713
|
Talking to Mistral-Nemo via OpenAI tool calling - fails
|
{
"login": "ChristianWeyer",
"id": 888718,
"node_id": "MDQ6VXNlcjg4ODcxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/888718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChristianWeyer",
"html_url": "https://github.com/ChristianWeyer",
"followers_url": "https://api.github.com/users/ChristianWeyer/followers",
"following_url": "https://api.github.com/users/ChristianWeyer/following{/other_user}",
"gists_url": "https://api.github.com/users/ChristianWeyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChristianWeyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChristianWeyer/subscriptions",
"organizations_url": "https://api.github.com/users/ChristianWeyer/orgs",
"repos_url": "https://api.github.com/users/ChristianWeyer/repos",
"events_url": "https://api.github.com/users/ChristianWeyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChristianWeyer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
open
| false
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2024-09-09T18:12:02
| 2025-01-16T14:54:23
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
With this curl command:
```
curl http://localhost:11434/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model":"mistral-nemo:12b-instruct-2407-fp16",
"messages": [
{
"role": "user",
"content": "What is the weather like in Boston?"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
}
],
"tool_choice": "auto"
}' | json_pp
```
we should be able to make an OpenAI API-compatible tool-use call against `mistral-nemo`.
But I get this result instead:
```
{
"choices" : [
{
"finish_reason" : "stop",
"index" : 0,
"message" : {
"content" : "Glad to help! In which unit would you like the temperature?",
"role" : "assistant"
}
}
],
"created" : 1725905432,
"id" : "chatcmpl-677",
"model" : "mistral-nemo:12b-instruct-2407-fp16",
"object" : "chat.completion",
"system_fingerprint" : "fp_ollama",
"usage" : {
"completion_tokens" : 15,
"prompt_tokens" : 95,
"total_tokens" : 110
}
}
```
Is there a missing config option or something similar?
Thanks.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.9
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6713/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4253
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4253/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4253/comments
|
https://api.github.com/repos/ollama/ollama/issues/4253/events
|
https://github.com/ollama/ollama/issues/4253
| 2,284,818,276
|
I_kwDOJ0Z1Ps6IL49k
| 4,253
|
A repeatable hang issue on Linux - dual radeon
|
{
"login": "eliranwong",
"id": 25262722,
"node_id": "MDQ6VXNlcjI1MjYyNzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25262722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliranwong",
"html_url": "https://github.com/eliranwong",
"followers_url": "https://api.github.com/users/eliranwong/followers",
"following_url": "https://api.github.com/users/eliranwong/following{/other_user}",
"gists_url": "https://api.github.com/users/eliranwong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliranwong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliranwong/subscriptions",
"organizations_url": "https://api.github.com/users/eliranwong/orgs",
"repos_url": "https://api.github.com/users/eliranwong/repos",
"events_url": "https://api.github.com/users/eliranwong/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliranwong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-05-08T06:49:58
| 2024-05-09T22:30:37
| 2024-05-09T22:08:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I experience a hang issue consistently.
Device information:
OS: Ubuntu, CPU: AMD Threadripper [AMD Ryzen Threadripper 7960X, 24 Cores, 48 Threads, 4.2GHz Base, 5.3GHz Turbo], Memory: 256GB RAM, Two GPUs: AMD RX 7900XTX + AMD RX 7900XTX
To reproduce the hang issue:
1. ollama run command-r-plus:104b
2. Ask a question and get a response
3. Ctrl+d to exit the session
4. Ask a question and get a response
5. Ctrl+d to exit the session
6. ollama run llama:70b
7. Ask a question and get a response
8. Ctrl+d to exit the session
9. ollama run command-r-plus:104b
Ollama hangs at step 9.
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.34
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4253/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3000
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3000/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3000/comments
|
https://api.github.com/repos/ollama/ollama/issues/3000/events
|
https://github.com/ollama/ollama/issues/3000
| 2,175,452,464
|
I_kwDOJ0Z1Ps6BqsUw
| 3,000
|
Server hangs with no response when running `gemma`
|
{
"login": "songsh",
"id": 2272252,
"node_id": "MDQ6VXNlcjIyNzIyNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2272252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songsh",
"html_url": "https://github.com/songsh",
"followers_url": "https://api.github.com/users/songsh/followers",
"following_url": "https://api.github.com/users/songsh/following{/other_user}",
"gists_url": "https://api.github.com/users/songsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songsh/subscriptions",
"organizations_url": "https://api.github.com/users/songsh/orgs",
"repos_url": "https://api.github.com/users/songsh/repos",
"events_url": "https://api.github.com/users/songsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/songsh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-03-08T07:29:25
| 2024-05-02T22:33:21
| 2024-05-02T22:33:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The serve process died. How do I check the problem, and where are the logs?
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3000/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7798
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7798/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7798/comments
|
https://api.github.com/repos/ollama/ollama/issues/7798/events
|
https://github.com/ollama/ollama/issues/7798
| 2,683,215,299
|
I_kwDOJ0Z1Ps6f7p3D
| 7,798
|
Is this a bug? (2GB model -> up to 20GB pagefile)
|
{
"login": "sebkont",
"id": 189359503,
"node_id": "U_kgDOC0lljw",
"avatar_url": "https://avatars.githubusercontent.com/u/189359503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sebkont",
"html_url": "https://github.com/sebkont",
"followers_url": "https://api.github.com/users/sebkont/followers",
"following_url": "https://api.github.com/users/sebkont/following{/other_user}",
"gists_url": "https://api.github.com/users/sebkont/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sebkont/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebkont/subscriptions",
"organizations_url": "https://api.github.com/users/sebkont/orgs",
"repos_url": "https://api.github.com/users/sebkont/repos",
"events_url": "https://api.github.com/users/sebkont/events{/privacy}",
"received_events_url": "https://api.github.com/users/sebkont/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-11-22T13:22:13
| 2024-12-02T15:36:27
| 2024-12-02T15:36:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
My GPU is old (a GTX 1070 with 8GB), but it should still be enough to run a model based on Phi 3 Mini? [This one](https://huggingface.co/v8karlo/UNCENSORED-Phi-3-mini-4k-geminified-Q4_K_M-GGUF)
Unfortunately, `ollama ps` reports 20 GB with a 63%/37% CPU/GPU split, and the C:/ drive instantly gets filled with 10-20GB of pagefile (which I don't think should be happening at all). Yet I rarely observe any GPU spikes; it is mostly idle, reaching 10% at best during prompt responses.
This makes me think it might be a bug or that something weird is going on. How can I stop it from dumping so much into the pagefile?
PS: in case it matters, the MODELS and TMPDIR paths in the Windows environment variables were changed to D:/, and I installed Ollama on my D:/ drive. My C: drive is too small, which is why the pagefile bothers me; besides, I wanted to limit everything to D:.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.3
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7798/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5635
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5635/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5635/comments
|
https://api.github.com/repos/ollama/ollama/issues/5635/events
|
https://github.com/ollama/ollama/issues/5635
| 2,403,793,783
|
I_kwDOJ0Z1Ps6PRvt3
| 5,635
|
ollama not use all GPUs
|
{
"login": "mavershang",
"id": 8919917,
"node_id": "MDQ6VXNlcjg5MTk5MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8919917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mavershang",
"html_url": "https://github.com/mavershang",
"followers_url": "https://api.github.com/users/mavershang/followers",
"following_url": "https://api.github.com/users/mavershang/following{/other_user}",
"gists_url": "https://api.github.com/users/mavershang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mavershang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mavershang/subscriptions",
"organizations_url": "https://api.github.com/users/mavershang/orgs",
"repos_url": "https://api.github.com/users/mavershang/repos",
"events_url": "https://api.github.com/users/mavershang/events{/privacy}",
"received_events_url": "https://api.github.com/users/mavershang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-07-11T18:12:48
| 2024-07-29T21:25:42
| 2024-07-29T21:25:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I ran ollama on a server with 4x A100 GPUs. It only uses 1 of them. Is there some setting that needs to be changed? Thanks

### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.1
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5635/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/49
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/49/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/49/comments
|
https://api.github.com/repos/ollama/ollama/issues/49/events
|
https://github.com/ollama/ollama/pull/49
| 1,792,432,386
|
PR_kwDOJ0Z1Ps5U26Op
| 49
|
Go run
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-07-06T23:03:39
| 2023-07-07T00:19:03
| 2023-07-07T00:18:58
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/49",
"html_url": "https://github.com/ollama/ollama/pull/49",
"diff_url": "https://github.com/ollama/ollama/pull/49.diff",
"patch_url": "https://github.com/ollama/ollama/pull/49.patch",
"merged_at": "2023-07-07T00:18:58"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/49/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/49/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5098
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5098/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5098/comments
|
https://api.github.com/repos/ollama/ollama/issues/5098/events
|
https://github.com/ollama/ollama/pull/5098
| 2,357,217,804
|
PR_kwDOJ0Z1Ps5yr-W8
| 5,098
|
feat: support setting the KV cache quant type
|
{
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/followers",
"following_url": "https://api.github.com/users/sammcj/following{/other_user}",
"gists_url": "https://api.github.com/users/sammcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammcj/subscriptions",
"organizations_url": "https://api.github.com/users/sammcj/orgs",
"repos_url": "https://api.github.com/users/sammcj/repos",
"events_url": "https://api.github.com/users/sammcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammcj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-06-17T12:24:22
| 2024-06-29T01:17:07
| 2024-06-28T21:50:53
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5098",
"html_url": "https://github.com/ollama/ollama/pull/5098",
"diff_url": "https://github.com/ollama/ollama/pull/5098.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5098.patch",
"merged_at": null
}
|
WIP
Testing adding configuration to allow setting the KV cache type, re: #5091
---
- Allow setting the KV cache type in the env and params.
- Allow setting flash attention in params (as well as the existing env).
|
{
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/followers",
"following_url": "https://api.github.com/users/sammcj/following{/other_user}",
"gists_url": "https://api.github.com/users/sammcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammcj/subscriptions",
"organizations_url": "https://api.github.com/users/sammcj/orgs",
"repos_url": "https://api.github.com/users/sammcj/repos",
"events_url": "https://api.github.com/users/sammcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammcj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5098/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5098/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/637
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/637/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/637/comments
|
https://api.github.com/repos/ollama/ollama/issues/637/events
|
https://github.com/ollama/ollama/pull/637
| 1,918,203,306
|
PR_kwDOJ0Z1Ps5bex-l
| 637
|
windows runner fixes
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-28T20:13:13
| 2023-09-29T15:47:56
| 2023-09-29T15:47:55
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/637",
"html_url": "https://github.com/ollama/ollama/pull/637",
"diff_url": "https://github.com/ollama/ollama/pull/637.diff",
"patch_url": "https://github.com/ollama/ollama/pull/637.patch",
"merged_at": "2023-09-29T15:47:55"
}
|
- use filepath for runner files
- get embedded files with unix filepath
- runner is only available if embedded directories have files
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/637/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2490
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2490/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2490/comments
|
https://api.github.com/repos/ollama/ollama/issues/2490/events
|
https://github.com/ollama/ollama/issues/2490
| 2,134,150,180
|
I_kwDOJ0Z1Ps5_NIwk
| 2,490
|
[Question] Do not offload to CPU RAM
|
{
"login": "freQuensy23-coder",
"id": 64750224,
"node_id": "MDQ6VXNlcjY0NzUwMjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/64750224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freQuensy23-coder",
"html_url": "https://github.com/freQuensy23-coder",
"followers_url": "https://api.github.com/users/freQuensy23-coder/followers",
"following_url": "https://api.github.com/users/freQuensy23-coder/following{/other_user}",
"gists_url": "https://api.github.com/users/freQuensy23-coder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/freQuensy23-coder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freQuensy23-coder/subscriptions",
"organizations_url": "https://api.github.com/users/freQuensy23-coder/orgs",
"repos_url": "https://api.github.com/users/freQuensy23-coder/repos",
"events_url": "https://api.github.com/users/freQuensy23-coder/events{/privacy}",
"received_events_url": "https://api.github.com/users/freQuensy23-coder/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-02-14T11:32:17
| 2024-03-16T19:39:26
| 2024-03-11T18:28:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
By default, after some time of inactivity, Ollama automatically unloads the model from GPU memory, which causes extra latency on the next request, especially for large models.
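A minimal sketch of one way to keep the model resident, using the documented `keep_alive` request parameter (a negative value keeps the model loaded indefinitely); the model name here is illustrative:

```python
import json

# Sketch: pin a model in GPU memory by setting keep_alive to -1
# (a negative value means "never unload after the request completes").
payload = {
    "model": "llama3",      # illustrative model name
    "prompt": "warm-up",
    "keep_alive": -1,
}

# This is the JSON body you would POST to http://localhost:11434/api/generate
body = json.dumps(payload)
print(body)
```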
|
{
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyeva/followers",
"following_url": "https://api.github.com/users/hoyyeva/following{/other_user}",
"gists_url": "https://api.github.com/users/hoyyeva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoyyeva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoyyeva/subscriptions",
"organizations_url": "https://api.github.com/users/hoyyeva/orgs",
"repos_url": "https://api.github.com/users/hoyyeva/repos",
"events_url": "https://api.github.com/users/hoyyeva/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoyyeva/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2490/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/238
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/238/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/238/comments
|
https://api.github.com/repos/ollama/ollama/issues/238/events
|
https://github.com/ollama/ollama/issues/238
| 1,827,345,140
|
I_kwDOJ0Z1Ps5s6xL0
| 238
|
Ability to download LLAMA2 7b 32k context
|
{
"login": "jlarmstrongiv",
"id": 20903247,
"node_id": "MDQ6VXNlcjIwOTAzMjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/20903247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlarmstrongiv",
"html_url": "https://github.com/jlarmstrongiv",
"followers_url": "https://api.github.com/users/jlarmstrongiv/followers",
"following_url": "https://api.github.com/users/jlarmstrongiv/following{/other_user}",
"gists_url": "https://api.github.com/users/jlarmstrongiv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlarmstrongiv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlarmstrongiv/subscriptions",
"organizations_url": "https://api.github.com/users/jlarmstrongiv/orgs",
"repos_url": "https://api.github.com/users/jlarmstrongiv/repos",
"events_url": "https://api.github.com/users/jlarmstrongiv/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlarmstrongiv/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-07-29T06:23:31
| 2023-12-04T19:02:13
| 2023-12-04T19:02:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
- https://together.ai/blog/llama-2-7b-32k
- https://github.com/togethercomputer/OpenChatKit
- https://huggingface.co/togethercomputer/LLaMA-2-7B-32K
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/238/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/238/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1365
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1365/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1365/comments
|
https://api.github.com/repos/ollama/ollama/issues/1365/events
|
https://github.com/ollama/ollama/issues/1365
| 2,022,633,560
|
I_kwDOJ0Z1Ps54jvBY
| 1,365
|
llama_print_timings have disappeared from the logs.
|
{
"login": "madsamjp",
"id": 49611363,
"node_id": "MDQ6VXNlcjQ5NjExMzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/49611363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madsamjp",
"html_url": "https://github.com/madsamjp",
"followers_url": "https://api.github.com/users/madsamjp/followers",
"following_url": "https://api.github.com/users/madsamjp/following{/other_user}",
"gists_url": "https://api.github.com/users/madsamjp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/madsamjp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/madsamjp/subscriptions",
"organizations_url": "https://api.github.com/users/madsamjp/orgs",
"repos_url": "https://api.github.com/users/madsamjp/repos",
"events_url": "https://api.github.com/users/madsamjp/events{/privacy}",
"received_events_url": "https://api.github.com/users/madsamjp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-12-03T17:26:16
| 2024-01-20T00:18:22
| 2024-01-20T00:18:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
In a previous version of Ollama, following the logs (on Linux using `journalctl -t ollama -f`) would give helpful information after the model has finished with its response (such as tokens per second).
e.g. this:
```
Dec 03 14:58:42 osm-server ollama[20658]: llama server listening at http://127.0.0.1:54457
Dec 03 14:58:42 osm-server ollama[20658]: {"timestamp":1701615522,"level":"INFO","function":"main","line":1746,"message":"HTTP server listening","hostname":"127.0.0.1","port":54457}
Dec 03 14:58:42 osm-server ollama[20658]: {"timestamp":1701615522,"level":"INFO","function":"log_server_request","line":1233,"message":"request","remote_addr":"127.0.0.1","remote_port":51344,"statu>
Dec 03 14:58:42 osm-server ollama[937]: 2023/12/03 14:58:42 llama.go:492: llama runner started in 9.200880 seconds
Dec 03 14:58:50 osm-server ollama[20658]: {"timestamp":1701615530,"level":"INFO","function":"log_server_request","line":1233,"message":"request","remote_addr":"127.0.0.1","remote_port":51344,"statu>
Dec 03 14:58:50 osm-server ollama[937]: llama_print_timings: load time = 8317.76 ms
Dec 03 14:58:50 osm-server ollama[937]: llama_print_timings: sample time = 107.35 ms / 396 runs ( 0.27 ms per token, 3688.73 tokens per second)
Dec 03 14:58:50 osm-server ollama[937]: llama_print_timings: prompt eval time = 444.18 ms / 800 tokens ( 0.56 ms per token, 1801.06 tokens per second)
Dec 03 14:58:50 osm-server ollama[937]: llama_print_timings: eval time = 6696.50 ms / 395 runs ( 16.95 ms per token, 58.99 tokens per second)
Dec 03 14:58:50 osm-server ollama[937]: llama_print_timings: total time = 7335.31 ms
```
This was really handy, but since updating Ollama, I've noticed this helpful info has gone. Is there an environment variable I can set to get it back?
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1365/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6356
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6356/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6356/comments
|
https://api.github.com/repos/ollama/ollama/issues/6356/events
|
https://github.com/ollama/ollama/issues/6356
| 2,465,338,496
|
I_kwDOJ0Z1Ps6S8hSA
| 6,356
|
AMD Multiple GPU support
|
{
"login": "VitalickS",
"id": 10177561,
"node_id": "MDQ6VXNlcjEwMTc3NTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/10177561?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VitalickS",
"html_url": "https://github.com/VitalickS",
"followers_url": "https://api.github.com/users/VitalickS/followers",
"following_url": "https://api.github.com/users/VitalickS/following{/other_user}",
"gists_url": "https://api.github.com/users/VitalickS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VitalickS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VitalickS/subscriptions",
"organizations_url": "https://api.github.com/users/VitalickS/orgs",
"repos_url": "https://api.github.com/users/VitalickS/repos",
"events_url": "https://api.github.com/users/VitalickS/events{/privacy}",
"received_events_url": "https://api.github.com/users/VitalickS/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-08-14T09:25:57
| 2024-10-16T00:15:13
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Hi,
I think the current AMD ROCm doesn’t work well with multiple video cards. I have an XTX 7900 (24GB) and an XT 7900 (20GB). My processor also has a small integrated GPU, but that shouldn’t be a problem.
When I try to load the model llama3.1:70b (39GB):
1. It doesn’t crash, but it has an infinite load time (at least 10 minutes, maybe more).
2. My PC gets stuck; I can’t move my mouse or do anything else, including exiting the loading process with Ctrl+C.
3. It uses (not very actively) only one GPU
4. The CPU is also loaded in the server process (only a few cores), and the only way to exit this mode is to shut down with the power button.
Here is my [server.log](https://github.com/user-attachments/files/16610677/server.log)
I can try anything you want, just tell me what to do (recompile llama.cpp or something else).
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.6
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6356/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8661
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8661/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8661/comments
|
https://api.github.com/repos/ollama/ollama/issues/8661/events
|
https://github.com/ollama/ollama/issues/8661
| 2,818,282,626
|
I_kwDOJ0Z1Ps6n-5SC
| 8,661
|
Will Ollama run on the NPU(ANE) of Apple M silicon?
|
{
"login": "imJack6",
"id": 58357771,
"node_id": "MDQ6VXNlcjU4MzU3Nzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/58357771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imJack6",
"html_url": "https://github.com/imJack6",
"followers_url": "https://api.github.com/users/imJack6/followers",
"following_url": "https://api.github.com/users/imJack6/following{/other_user}",
"gists_url": "https://api.github.com/users/imJack6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imJack6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imJack6/subscriptions",
"organizations_url": "https://api.github.com/users/imJack6/orgs",
"repos_url": "https://api.github.com/users/imJack6/repos",
"events_url": "https://api.github.com/users/imJack6/events{/privacy}",
"received_events_url": "https://api.github.com/users/imJack6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-29T13:50:08
| 2025-01-29T13:50:08
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
RT
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8661/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8661/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3119
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3119/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3119/comments
|
https://api.github.com/repos/ollama/ollama/issues/3119/events
|
https://github.com/ollama/ollama/issues/3119
| 2,184,596,868
|
I_kwDOJ0Z1Ps6CNk2E
| 3,119
|
Tensor `token_embed.weight` has wrong shape
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-03-13T18:01:27
| 2024-03-13T20:30:47
| 2024-03-13T18:21:34
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

split from #2753
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3119/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4932
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4932/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4932/comments
|
https://api.github.com/repos/ollama/ollama/issues/4932/events
|
https://github.com/ollama/ollama/issues/4932
| 2,341,666,209
|
I_kwDOJ0Z1Ps6Lkv2h
| 4,932
|
Cant see installed models
|
{
"login": "ahgsql",
"id": 35695543,
"node_id": "MDQ6VXNlcjM1Njk1NTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/35695543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahgsql",
"html_url": "https://github.com/ahgsql",
"followers_url": "https://api.github.com/users/ahgsql/followers",
"following_url": "https://api.github.com/users/ahgsql/following{/other_user}",
"gists_url": "https://api.github.com/users/ahgsql/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahgsql/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahgsql/subscriptions",
"organizations_url": "https://api.github.com/users/ahgsql/orgs",
"repos_url": "https://api.github.com/users/ahgsql/repos",
"events_url": "https://api.github.com/users/ahgsql/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahgsql/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-06-08T13:51:57
| 2024-08-10T05:41:42
| 2024-08-09T23:51:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have 7 models installed and was using them until yesterday.
But now Ollama tries to re-download them, even though I have all the manifest files and my blobs folder is over 18 GB.
After shutting down and restarting WSL, Ollama is not running, so I am trying to start it with the `ollama serve` command.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.1.38
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4932/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5323
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5323/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5323/comments
|
https://api.github.com/repos/ollama/ollama/issues/5323/events
|
https://github.com/ollama/ollama/issues/5323
| 2,377,959,722
|
I_kwDOJ0Z1Ps6NvMkq
| 5,323
|
Weird output with any typos in accepted commands
|
{
"login": "yoshimario",
"id": 8993080,
"node_id": "MDQ6VXNlcjg5OTMwODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8993080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoshimario",
"html_url": "https://github.com/yoshimario",
"followers_url": "https://api.github.com/users/yoshimario/followers",
"following_url": "https://api.github.com/users/yoshimario/following{/other_user}",
"gists_url": "https://api.github.com/users/yoshimario/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoshimario/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoshimario/subscriptions",
"organizations_url": "https://api.github.com/users/yoshimario/orgs",
"repos_url": "https://api.github.com/users/yoshimario/repos",
"events_url": "https://api.github.com/users/yoshimario/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoshimario/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-06-27T11:59:02
| 2024-06-27T22:31:00
| 2024-06-27T22:29:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The problem is that the program outputs weird text when it receives commands that are not in its command list. This should produce an error message instead of a never-ending loop of erroneous output, rather than forcing the user to kill the process with Ctrl+C.
```
>>> exit
exit
cd /
exit
ls -la
cd /etc/rc.d/init.d
ls -lha
vim httpd
exit
ls -la
cd /etc/sysconfig
ls -la
vim httpd
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.46
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5323/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4153
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4153/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4153/comments
|
https://api.github.com/repos/ollama/ollama/issues/4153/events
|
https://github.com/ollama/ollama/pull/4153
| 2,279,118,097
|
PR_kwDOJ0Z1Ps5ujE7R
| 4,153
|
Add GPU usage
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-05-04T17:10:08
| 2024-05-08T23:39:14
| 2024-05-08T23:39:11
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4153",
"html_url": "https://github.com/ollama/ollama/pull/4153",
"diff_url": "https://github.com/ollama/ollama/pull/4153.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4153.patch",
"merged_at": "2024-05-08T23:39:11"
}
|
Help users understand how much of the model fits into their GPU without having to resort to inspecting the server log.
A few examples from different systems and models:
```
eval rate: 4.40 tokens/s
gpu usage: 1 GPU (14/27 layers) 3.2 GB (2.0 GB GPU)
eval rate: 6.64 tokens/s
gpu usage: 1 GPU (27/27 layers) 3.2 GB
eval rate: 18.44 tokens/s
gpu usage: 2 GPUs (27/33 layers) 27 GB (24 GB GPU)
eval rate: 19.58 tokens/s
gpu usage: CPU (0/27 layers) 3.2 GB
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4153/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4153/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7815
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7815/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7815/comments
|
https://api.github.com/repos/ollama/ollama/issues/7815/events
|
https://github.com/ollama/ollama/issues/7815
| 2,687,656,860
|
I_kwDOJ0Z1Ps6gMmOc
| 7,815
|
Any fine-tuning ways?
|
{
"login": "Niifuji",
"id": 111742025,
"node_id": "U_kgDOBqkMSQ",
"avatar_url": "https://avatars.githubusercontent.com/u/111742025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Niifuji",
"html_url": "https://github.com/Niifuji",
"followers_url": "https://api.github.com/users/Niifuji/followers",
"following_url": "https://api.github.com/users/Niifuji/following{/other_user}",
"gists_url": "https://api.github.com/users/Niifuji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Niifuji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Niifuji/subscriptions",
"organizations_url": "https://api.github.com/users/Niifuji/orgs",
"repos_url": "https://api.github.com/users/Niifuji/repos",
"events_url": "https://api.github.com/users/Niifuji/events{/privacy}",
"received_events_url": "https://api.github.com/users/Niifuji/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-24T12:14:14
| 2024-12-23T07:57:42
| 2024-12-23T07:57:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
As I mentioned in the title, I want to "continuously" fine-tune a pre-trained model with my custom dataset and explore adding some "emotion" to it (not sure why this idea came to mind). If you have any features or suggestions for this, I would appreciate your input.
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7815/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5187
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5187/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5187/comments
|
https://api.github.com/repos/ollama/ollama/issues/5187/events
|
https://github.com/ollama/ollama/pull/5187
| 2,364,767,566
|
PR_kwDOJ0Z1Ps5zF1G4
| 5,187
|
fix: skip os.removeAll() in assets.go if no PID
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-06-20T15:48:16
| 2024-06-20T15:53:26
| 2024-06-20T15:49:39
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5187",
"html_url": "https://github.com/ollama/ollama/pull/5187",
"diff_url": "https://github.com/ollama/ollama/pull/5187.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5187.patch",
"merged_at": null
}
|
We accidentally deleted every directory in $TMPDIR matching the pattern "ollama*". This change checks the stored PID to ensure a directory is ours before deleting it.
Resolves: https://github.com/ollama/ollama/issues/5129
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5187/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4131
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4131/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4131/comments
|
https://api.github.com/repos/ollama/ollama/issues/4131/events
|
https://github.com/ollama/ollama/issues/4131
| 2,278,060,992
|
I_kwDOJ0Z1Ps6HyHPA
| 4,131
|
Error "timed out waiting for llama runner to start: " on larger models.
|
{
"login": "CalvesGEH",
"id": 42101564,
"node_id": "MDQ6VXNlcjQyMTAxNTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/42101564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CalvesGEH",
"html_url": "https://github.com/CalvesGEH",
"followers_url": "https://api.github.com/users/CalvesGEH/followers",
"following_url": "https://api.github.com/users/CalvesGEH/following{/other_user}",
"gists_url": "https://api.github.com/users/CalvesGEH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CalvesGEH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CalvesGEH/subscriptions",
"organizations_url": "https://api.github.com/users/CalvesGEH/orgs",
"repos_url": "https://api.github.com/users/CalvesGEH/repos",
"events_url": "https://api.github.com/users/CalvesGEH/events{/privacy}",
"received_events_url": "https://api.github.com/users/CalvesGEH/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 45
| 2024-05-03T16:45:32
| 2024-12-18T05:50:45
| 2024-07-03T23:28:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I just set up Ollama on a fresh machine and am running into an issue starting larger models.
I am running Ubuntu 22.04.4 LTS with 2 Nvidia Tesla P40 GPUs, Driver Version 535.161.08, and CUDA Version 12.2.
Small 8b models work great with no issues, but when I try something like a 34b or 70b model, I get the error "timed out waiting for llama runner to start: ".
Here are the logs from the `ollama serve` process:
```
user@hostname:~$ ollama serve
time=2024-05-03T16:26:00.169Z level=INFO source=images.go:828 msg="total blobs: 0"
time=2024-05-03T16:26:00.169Z level=INFO source=images.go:835 msg="total unused blobs removed: 0"
time=2024-05-03T16:26:00.169Z level=INFO source=routes.go:1071 msg="Listening on 127.0.0.1:11434 (version 0.1.33)"
time=2024-05-03T16:26:00.170Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama837848792/runners
time=2024-05-03T16:26:04.596Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
time=2024-05-03T16:26:04.596Z level=INFO source=gpu.go:96 msg="Detecting GPUs"
time=2024-05-03T16:26:05.377Z level=INFO source=gpu.go:101 msg="detected GPUs" library=/tmp/ollama837848792/runners/cuda_v11/libcudart.so.11.0 count=2
time=2024-05-03T16:26:05.377Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[GIN] 2024/05/03 - 16:26:15 | 200 | 66.48µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/03 - 16:26:15 | 404 | 207.93µs | 127.0.0.1 | POST "/api/show"
time=2024-05-03T16:26:17.456Z level=INFO source=download.go:136 msg="downloading f36b668ebcd3 in 64 297 MB part(s)"
time=2024-05-03T16:27:30.886Z level=INFO source=download.go:178 msg="f36b668ebcd3 part 1 attempt 0 failed: unexpected EOF, retrying in 1s"
time=2024-05-03T16:27:46.457Z level=INFO source=download.go:251 msg="f36b668ebcd3 part 61 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2024-05-03T16:29:08.175Z level=INFO source=download.go:136 msg="downloading 2e0493f67d0c in 1 59 B part(s)"
time=2024-05-03T16:29:09.864Z level=INFO source=download.go:136 msg="downloading c60122cb2728 in 1 132 B part(s)"
time=2024-05-03T16:29:11.547Z level=INFO source=download.go:136 msg="downloading d5981b4f8e77 in 1 382 B part(s)"
[GIN] 2024/05/03 - 16:30:06 | 200 | 3m50s | 127.0.0.1 | POST "/api/pull"
[GIN] 2024/05/03 - 16:30:06 | 200 | 1.142112ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/03 - 16:30:06 | 200 | 291.938µs | 127.0.0.1 | POST "/api/show"
time=2024-05-03T16:30:06.522Z level=INFO source=gpu.go:96 msg="Detecting GPUs"
time=2024-05-03T16:30:06.525Z level=INFO source=gpu.go:101 msg="detected GPUs" library=/tmp/ollama837848792/runners/cuda_v11/libcudart.so.11.0 count=2
time=2024-05-03T16:30:06.525Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-03T16:30:07.346Z level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=49 memory.available="24297.6 MiB" memory.required.full="19193.1 MiB" memory.required.partial="19193.1 MiB" memory.required.kv="384.0 MiB" memory.weights.total="18028.1 MiB" memory.weights.repeating="17823.0 MiB" memory.weights.nonrepeating="205.1 MiB" memory.graph.full="324.0 MiB" memory.graph.partial="348.0 MiB"
time=2024-05-03T16:30:07.347Z level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=49 memory.available="24297.6 MiB" memory.required.full="19193.1 MiB" memory.required.partial="19193.1 MiB" memory.required.kv="384.0 MiB" memory.weights.total="18028.1 MiB" memory.weights.repeating="17823.0 MiB" memory.weights.nonrepeating="205.1 MiB" memory.graph.full="324.0 MiB" memory.graph.partial="348.0 MiB"
time=2024-05-03T16:30:07.347Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-03T16:30:07.347Z level=INFO source=server.go:289 msg="starting llama server" cmd="/tmp/ollama837848792/runners/cuda_v11/ollama_llama_server --model /home/ptmoraski/.ollama/models/blobs/sha256-f36b668ebcd329357fac22db35f6414a1c9309307f33d08fe217bbf84b0496cc --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 49 --parallel 1 --port 40909"
time=2024-05-03T16:30:07.348Z level=INFO source=sched.go:340 msg="loaded runners" count=1
time=2024-05-03T16:30:07.348Z level=INFO source=server.go:432 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2606,"msg":"logging to file is disabled.","tid":"139735583424512","timestamp":1714753807}
{"build":1,"commit":"952d03d","function":"main","level":"INFO","line":2822,"msg":"build info","tid":"139735583424512","timestamp":1714753807}
{"function":"main","level":"INFO","line":2825,"msg":"system info","n_threads":16,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | ","tid":"139735583424512","timestamp":1714753807,"total_threads":32}
llama_model_loader: loaded meta data with 20 key-value pairs and 435 tensors from /home/ptmoraski/.ollama/models/blobs/sha256-f36b668ebcd329357fac22db35f6414a1c9309307f33d08fe217bbf84b0496cc (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = codellama
llama_model_loader: - kv 2: llama.context_length u32 = 16384
llama_model_loader: - kv 3: llama.embedding_length u32 = 8192
llama_model_loader: - kv 4: llama.block_count u32 = 48
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 22016
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 64
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: general.quantization_version u32 = 2
llama_model_loader: - type f32: 97 tensors
llama_model_loader: - type q4_0: 337 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V2
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 16384
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 48
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 22016
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 16384
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 34B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 33.74 B
llm_load_print_meta: model size = 17.74 GiB (4.52 BPW)
llm_load_print_meta: general.name = codellama
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: PRE token = 32007 '`*▒'
time=2024-05-03T16:40:07.352Z level=ERROR source=sched.go:346 msg="error loading llama server" error="timed out waiting for llama runner to start: "
[GIN] 2024/05/03 - 16:40:07 | 500 | 10m0s | 127.0.0.1 | POST "/api/chat"
timed out waiting for llama runner to start:
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4131/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5454
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5454/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5454/comments
|
https://api.github.com/repos/ollama/ollama/issues/5454/events
|
https://github.com/ollama/ollama/issues/5454
| 2,387,738,883
|
I_kwDOJ0Z1Ps6OUgED
| 5,454
|
When can we perform function calls like OpenAI?
|
{
"login": "qq1005894049",
"id": 48113255,
"node_id": "MDQ6VXNlcjQ4MTEzMjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/48113255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qq1005894049",
"html_url": "https://github.com/qq1005894049",
"followers_url": "https://api.github.com/users/qq1005894049/followers",
"following_url": "https://api.github.com/users/qq1005894049/following{/other_user}",
"gists_url": "https://api.github.com/users/qq1005894049/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qq1005894049/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qq1005894049/subscriptions",
"organizations_url": "https://api.github.com/users/qq1005894049/orgs",
"repos_url": "https://api.github.com/users/qq1005894049/repos",
"events_url": "https://api.github.com/users/qq1005894049/events{/privacy}",
"received_events_url": "https://api.github.com/users/qq1005894049/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-03T05:57:43
| 2024-07-30T17:25:18
| 2024-07-30T17:25:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
<img width="1341" alt="image" src="https://github.com/ollama/ollama/assets/48113255/70ca615f-aae2-4b48-bd8f-c913b2ede23e">
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5454/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5454/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8110
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8110/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8110/comments
|
https://api.github.com/repos/ollama/ollama/issues/8110/events
|
https://github.com/ollama/ollama/issues/8110
| 2,741,302,826
|
I_kwDOJ0Z1Ps6jZPYq
| 8,110
|
Support llama.cpp's Control Vector Functionality
|
{
"login": "amyb-asu",
"id": 156008468,
"node_id": "U_kgDOCUyAFA",
"avatar_url": "https://avatars.githubusercontent.com/u/156008468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyb-asu",
"html_url": "https://github.com/amyb-asu",
"followers_url": "https://api.github.com/users/amyb-asu/followers",
"following_url": "https://api.github.com/users/amyb-asu/following{/other_user}",
"gists_url": "https://api.github.com/users/amyb-asu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyb-asu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyb-asu/subscriptions",
"organizations_url": "https://api.github.com/users/amyb-asu/orgs",
"repos_url": "https://api.github.com/users/amyb-asu/repos",
"events_url": "https://api.github.com/users/amyb-asu/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyb-asu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 3
| 2024-12-16T04:30:35
| 2024-12-18T00:30:05
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
llama.cpp added support for control vectors a while ago: https://github.com/ggerganov/llama.cpp/pull/5970
They can be loaded via `llama_control_vector_load` and `llama_control_vector_apply`, which accept a vector in `.gguf` form.
https://github.com/ollama/ollama/blob/main/llama/common.h#L645
https://github.com/ollama/ollama/blob/main/llama/llama.h#L571
Example of how llama.cpp normally applies them: https://github.com/ollama/ollama/blob/main/llama/common.cpp#L920-L944
The vectors can be trained and exported to `.gguf` via https://github.com/vgel/repeng/
It would be great if we could load control vectors the same way that LoRA adapters can currently be loaded.
```dockerfile
FROM ./models/mistralai/Mistral-7B-Instruct-v0.1
CONTROLVECTOR ./vectors/my_control_vector.gguf 1.5
```
I think this feature would open up a range of model customization that is currently only possible through the more difficult (much slower and far more memory-intensive) adapter training methods.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8110/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/64
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/64/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/64/comments
|
https://api.github.com/repos/ollama/ollama/issues/64/events
|
https://github.com/ollama/ollama/pull/64
| 1,796,593,917
|
PR_kwDOJ0Z1Ps5VE8wr
| 64
|
Do not seg fault on client disconnect
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-10T11:46:30
| 2023-07-11T14:19:33
| 2023-07-10T15:00:45
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/64",
"html_url": "https://github.com/ollama/ollama/pull/64",
"diff_url": "https://github.com/ollama/ollama/pull/64.diff",
"patch_url": "https://github.com/ollama/ollama/pull/64.patch",
"merged_at": "2023-07-10T15:00:44"
}
|
This was nicer to fix on the revised `b2` branch, so this is a pull request into that simplified change
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/64/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/64/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3494
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3494/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3494/comments
|
https://api.github.com/repos/ollama/ollama/issues/3494/events
|
https://github.com/ollama/ollama/pull/3494
| 2,226,051,077
|
PR_kwDOJ0Z1Ps5rvMkF
| 3,494
|
Fail fast if mingw missing on windows
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-04T16:52:06
| 2024-04-04T17:15:44
| 2024-04-04T17:15:40
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3494",
"html_url": "https://github.com/ollama/ollama/pull/3494",
"diff_url": "https://github.com/ollama/ollama/pull/3494.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3494.patch",
"merged_at": "2024-04-04T17:15:40"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3494/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4974
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4974/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4974/comments
|
https://api.github.com/repos/ollama/ollama/issues/4974/events
|
https://github.com/ollama/ollama/issues/4974
| 2,345,961,529
|
I_kwDOJ0Z1Ps6L1Ig5
| 4,974
|
panic: runtime error: invalid memory address or nil pointer dereference
|
{
"login": "wywself",
"id": 8843053,
"node_id": "MDQ6VXNlcjg4NDMwNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8843053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wywself",
"html_url": "https://github.com/wywself",
"followers_url": "https://api.github.com/users/wywself/followers",
"following_url": "https://api.github.com/users/wywself/following{/other_user}",
"gists_url": "https://api.github.com/users/wywself/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wywself/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wywself/subscriptions",
"organizations_url": "https://api.github.com/users/wywself/orgs",
"repos_url": "https://api.github.com/users/wywself/repos",
"events_url": "https://api.github.com/users/wywself/events{/privacy}",
"received_events_url": "https://api.github.com/users/wywself/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-11T09:52:18
| 2024-06-12T02:04:31
| 2024-06-12T02:04:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am using a Tesla M60, which is on the supported GPU list. However, when I run the following command to start the model, it fails with the error below.
```
# ollama run qwen:7b
Error: Post "http://127.0.0.1:11434/api/chat": EOF
```
The log is as follows:

`lscpu` as follows:

How can I resolve this? Thank you.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.42
|
{
"login": "wywself",
"id": 8843053,
"node_id": "MDQ6VXNlcjg4NDMwNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8843053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wywself",
"html_url": "https://github.com/wywself",
"followers_url": "https://api.github.com/users/wywself/followers",
"following_url": "https://api.github.com/users/wywself/following{/other_user}",
"gists_url": "https://api.github.com/users/wywself/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wywself/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wywself/subscriptions",
"organizations_url": "https://api.github.com/users/wywself/orgs",
"repos_url": "https://api.github.com/users/wywself/repos",
"events_url": "https://api.github.com/users/wywself/events{/privacy}",
"received_events_url": "https://api.github.com/users/wywself/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4974/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/860
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/860/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/860/comments
|
https://api.github.com/repos/ollama/ollama/issues/860/events
|
https://github.com/ollama/ollama/issues/860
| 1,954,877,675
|
I_kwDOJ0Z1Ps50hRDr
| 860
|
bug: the `-v` for `--version` should be capital `-V`
|
{
"login": "coolaj86",
"id": 122831,
"node_id": "MDQ6VXNlcjEyMjgzMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/122831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coolaj86",
"html_url": "https://github.com/coolaj86",
"followers_url": "https://api.github.com/users/coolaj86/followers",
"following_url": "https://api.github.com/users/coolaj86/following{/other_user}",
"gists_url": "https://api.github.com/users/coolaj86/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coolaj86/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coolaj86/subscriptions",
"organizations_url": "https://api.github.com/users/coolaj86/orgs",
"repos_url": "https://api.github.com/users/coolaj86/repos",
"events_url": "https://api.github.com/users/coolaj86/events{/privacy}",
"received_events_url": "https://api.github.com/users/coolaj86/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-10-20T19:05:59
| 2023-10-20T21:55:16
| 2023-10-20T19:38:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I just noticed a mix-up in the shorthand for the `--version` flag.
By convention, uppercase `-V` is the shorthand for `--version` (lowercase `-v` is for `--verbose`).
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/860/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7010
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7010/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7010/comments
|
https://api.github.com/repos/ollama/ollama/issues/7010/events
|
https://github.com/ollama/ollama/pull/7010
| 2,553,850,810
|
PR_kwDOJ0Z1Ps58-L8j
| 7,010
|
llama: Fix directory for conditional flash attention patch
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-09-27T22:52:40
| 2024-10-10T21:38:26
| 2024-09-30T19:41:33
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7010",
"html_url": "https://github.com/ollama/ollama/pull/7010",
"diff_url": "https://github.com/ollama/ollama/pull/7010.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7010.patch",
"merged_at": null
}
|
Patches are against the llama.cpp directory structure, otherwise sync.sh can't apply them.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7010/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/523
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/523/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/523/comments
|
https://api.github.com/repos/ollama/ollama/issues/523/events
|
https://github.com/ollama/ollama/issues/523
| 1,894,010,611
|
I_kwDOJ0Z1Ps5w5E7z
| 523
|
LLM falcon:text infinity loop
|
{
"login": "dcasota",
"id": 14890243,
"node_id": "MDQ6VXNlcjE0ODkwMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/14890243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcasota",
"html_url": "https://github.com/dcasota",
"followers_url": "https://api.github.com/users/dcasota/followers",
"following_url": "https://api.github.com/users/dcasota/following{/other_user}",
"gists_url": "https://api.github.com/users/dcasota/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcasota/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcasota/subscriptions",
"organizations_url": "https://api.github.com/users/dcasota/orgs",
"repos_url": "https://api.github.com/users/dcasota/repos",
"events_url": "https://api.github.com/users/dcasota/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcasota/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-09-13T08:26:09
| 2023-09-13T14:32:53
| 2023-09-13T13:56:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I was trying to run falcon, but it responds... weirdly.
Setup recipe.
```
git clone https://github.com/jmorganca/ollama
cd .\ollama
mkdir ..\.ollama
go generate .\...
go build .
```
Then, start the server component of ollama.
`start "Ollama server component" ollama.exe serve`
Download the selected model.
`ollama.exe pull falcon:text`
Run the model.
`ollama.exe run falcon:text`
I've started a conversation with a simple "Hi".
The output started by listing `date.getDay();` and didn't stop, not at 7, not at 31, and not at 365... it seems like an infinite loop.
<img src="https://github.com/jmorganca/ollama/assets/14890243/59f54411-f58d-4f9e-97ef-bce53cb6fedc" alt="image" width="200">
After Ctrl-C and sending a simple "Hi" a second and third time, it no longer lists getDay, but instead responds with excerpts from letters someone wrote.
What is the purpose of falcon:text?
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/523/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3441
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3441/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3441/comments
|
https://api.github.com/repos/ollama/ollama/issues/3441/events
|
https://github.com/ollama/ollama/issues/3441
| 2,218,582,612
|
I_kwDOJ0Z1Ps6EPOJU
| 3,441
|
Download/Archive and move models offline
|
{
"login": "Solomin0",
"id": 37559666,
"node_id": "MDQ6VXNlcjM3NTU5NjY2",
"avatar_url": "https://avatars.githubusercontent.com/u/37559666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Solomin0",
"html_url": "https://github.com/Solomin0",
"followers_url": "https://api.github.com/users/Solomin0/followers",
"following_url": "https://api.github.com/users/Solomin0/following{/other_user}",
"gists_url": "https://api.github.com/users/Solomin0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Solomin0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Solomin0/subscriptions",
"organizations_url": "https://api.github.com/users/Solomin0/orgs",
"repos_url": "https://api.github.com/users/Solomin0/repos",
"events_url": "https://api.github.com/users/Solomin0/events{/privacy}",
"received_events_url": "https://api.github.com/users/Solomin0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 10
| 2024-04-01T16:09:15
| 2024-10-21T08:22:19
| 2024-05-10T20:18:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
I would like to be able to move ollama models between environments that are offline. There does not seem to be a supported official way to do this.
### How should we solve this?
An ollama archive command would be great! Then the user could just ollama pull from the path the archive is saved in.
### What is the impact of not solving this?
I can currently work around this by zipping the ./ollama folder on a machine with internet access and then copying it over. I can only move one model at a time, or the zips become impractical, since I don't have a way to separate multiple models.
I am also wary of importing a new model into an existing environment, as copying the ./ollama folder over seems sketchy as is.
### Anything else?
Any advice or other workarounds would be appreciated. Thanks yall.
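One conceivable approach is identifying exactly which blob files a model's manifest references before copying them. A purely illustrative sketch (the manifest layout assumed here, with `config` and `layers` digests, is hypothetical and not Ollama's guaranteed on-disk format):

```python
import json

def blobs_for_manifest(manifest_path):
    """List the blob digests a model manifest references.

    Assumes a hypothetical OCI-style layout with a `config` digest
    plus a list of `layers`, each carrying its own `digest`.
    """
    with open(manifest_path) as f:
        m = json.load(f)
    digests = [m["config"]["digest"]]
    digests += [layer["digest"] for layer in m.get("layers", [])]
    return digests
```

With such a list, only the referenced blobs for one model would need to be archived, rather than the whole ./ollama folder.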
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3441/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2051
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2051/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2051/comments
|
https://api.github.com/repos/ollama/ollama/issues/2051/events
|
https://github.com/ollama/ollama/issues/2051
| 2,088,450,843
|
I_kwDOJ0Z1Ps58ezsb
| 2,051
|
Mixtral : How to connect to the Web
|
{
"login": "ymoymo",
"id": 10183941,
"node_id": "MDQ6VXNlcjEwMTgzOTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/10183941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ymoymo",
"html_url": "https://github.com/ymoymo",
"followers_url": "https://api.github.com/users/ymoymo/followers",
"following_url": "https://api.github.com/users/ymoymo/following{/other_user}",
"gists_url": "https://api.github.com/users/ymoymo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ymoymo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ymoymo/subscriptions",
"organizations_url": "https://api.github.com/users/ymoymo/orgs",
"repos_url": "https://api.github.com/users/ymoymo/repos",
"events_url": "https://api.github.com/users/ymoymo/events{/privacy}",
"received_events_url": "https://api.github.com/users/ymoymo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-18T14:50:38
| 2024-03-11T18:13:47
| 2024-03-11T18:13:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I want to modify the script to expose this service, but I can't find the Docker container ID or name that runs the Mixtral instance.
`sudo docker ps` returns nothing while Mixtral is running.
Is there something I don't understand?
Thanks for any help.
Linux Pop Os
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2051/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3651
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3651/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3651/comments
|
https://api.github.com/repos/ollama/ollama/issues/3651/events
|
https://github.com/ollama/ollama/pull/3651
| 2,243,436,782
|
PR_kwDOJ0Z1Ps5sqeHc
| 3,651
|
If OLLAMA_CONTAINER_MANAGER is set, only install NVIDIA drivers
|
{
"login": "ericcurtin",
"id": 1694275,
"node_id": "MDQ6VXNlcjE2OTQyNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1694275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ericcurtin",
"html_url": "https://github.com/ericcurtin",
"followers_url": "https://api.github.com/users/ericcurtin/followers",
"following_url": "https://api.github.com/users/ericcurtin/following{/other_user}",
"gists_url": "https://api.github.com/users/ericcurtin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ericcurtin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ericcurtin/subscriptions",
"organizations_url": "https://api.github.com/users/ericcurtin/orgs",
"repos_url": "https://api.github.com/users/ericcurtin/repos",
"events_url": "https://api.github.com/users/ericcurtin/events{/privacy}",
"received_events_url": "https://api.github.com/users/ericcurtin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 2
| 2024-04-15T11:33:41
| 2024-04-16T08:22:58
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3651",
"html_url": "https://github.com/ollama/ollama/pull/3651",
"diff_url": "https://github.com/ollama/ollama/pull/3651.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3651.patch",
"merged_at": null
}
|
If installing for a containerized environment, we should not have to install the ollama binary, configure systemd, install ROCm, etc.
Intended to be run like this:
curl -fsSL https://ollama.com/install.sh | OLLAMA_CONTAINER_MANAGER=podman sh
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3651/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3142
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3142/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3142/comments
|
https://api.github.com/repos/ollama/ollama/issues/3142/events
|
https://github.com/ollama/ollama/pull/3142
| 2,186,567,894
|
PR_kwDOJ0Z1Ps5ppF8X
| 3,142
|
doc: faq gpu compatibility
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-03-14T14:47:11
| 2024-03-21T09:21:35
| 2024-03-21T09:21:34
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3142",
"html_url": "https://github.com/ollama/ollama/pull/3142",
"diff_url": "https://github.com/ollama/ollama/pull/3142.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3142.patch",
"merged_at": "2024-03-21T09:21:34"
}
|
Add some information about GPU compatibility to the FAQs.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3142/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/13
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/13/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/13/comments
|
https://api.github.com/repos/ollama/ollama/issues/13/events
|
https://github.com/ollama/ollama/pull/13
| 1,779,612,155
|
PR_kwDOJ0Z1Ps5ULXaq
| 13
|
update development.md
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-06-28T19:30:01
| 2023-06-28T19:44:59
| 2023-06-28T19:44:56
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/13",
"html_url": "https://github.com/ollama/ollama/pull/13",
"diff_url": "https://github.com/ollama/ollama/pull/13.diff",
"patch_url": "https://github.com/ollama/ollama/pull/13.patch",
"merged_at": "2023-06-28T19:44:56"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/13/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/13/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1838
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1838/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1838/comments
|
https://api.github.com/repos/ollama/ollama/issues/1838/events
|
https://github.com/ollama/ollama/issues/1838
| 2,069,059,120
|
I_kwDOJ0Z1Ps57U1Yw
| 1,838
|
Cuda Error with 2GB VRAM: `Error: Post "http://127.0.0.1:11434/api/generate": EOF`
|
{
"login": "falaimo",
"id": 29931008,
"node_id": "MDQ6VXNlcjI5OTMxMDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/29931008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/falaimo",
"html_url": "https://github.com/falaimo",
"followers_url": "https://api.github.com/users/falaimo/followers",
"following_url": "https://api.github.com/users/falaimo/following{/other_user}",
"gists_url": "https://api.github.com/users/falaimo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/falaimo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/falaimo/subscriptions",
"organizations_url": "https://api.github.com/users/falaimo/orgs",
"repos_url": "https://api.github.com/users/falaimo/repos",
"events_url": "https://api.github.com/users/falaimo/events{/privacy}",
"received_events_url": "https://api.github.com/users/falaimo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-01-07T09:38:40
| 2024-01-08T21:42:01
| 2024-01-08T21:42:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello everyone, in Ollama version 0.1.18, I'm encountering the error "Error: Post "http://127.0.0.1:11434/api/generate": EOF" when starting Ollama with any model. I think it depends on CUDA...
[logs_ollama.txt](https://github.com/jmorganca/ollama/files/13852832/logs_ollama.txt)
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1838/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6061
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6061/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6061/comments
|
https://api.github.com/repos/ollama/ollama/issues/6061/events
|
https://github.com/ollama/ollama/issues/6061
| 2,436,319,840
|
I_kwDOJ0Z1Ps6RN0pg
| 6,061
|
[Feature Request] Force function calling for a model
|
{
"login": "mak448a",
"id": 94062293,
"node_id": "U_kgDOBZtG1Q",
"avatar_url": "https://avatars.githubusercontent.com/u/94062293?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mak448a",
"html_url": "https://github.com/mak448a",
"followers_url": "https://api.github.com/users/mak448a/followers",
"following_url": "https://api.github.com/users/mak448a/following{/other_user}",
"gists_url": "https://api.github.com/users/mak448a/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mak448a/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mak448a/subscriptions",
"organizations_url": "https://api.github.com/users/mak448a/orgs",
"repos_url": "https://api.github.com/users/mak448a/repos",
"events_url": "https://api.github.com/users/mak448a/events{/privacy}",
"received_events_url": "https://api.github.com/users/mak448a/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-07-29T20:43:00
| 2025-01-06T07:17:25
| 2025-01-06T07:17:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Not sure if this belongs in ollama-python or here, but I'll open it here. Could you add a way to use function calling on any model, or is this something that the model itself has to support?
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6061/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6061/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2096
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2096/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2096/comments
|
https://api.github.com/repos/ollama/ollama/issues/2096/events
|
https://github.com/ollama/ollama/issues/2096
| 2,090,734,716
|
I_kwDOJ0Z1Ps58nhR8
| 2,096
|
How is Tinyllama on Ollama trained?
|
{
"login": "oliverbob",
"id": 23272429,
"node_id": "MDQ6VXNlcjIzMjcyNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverbob",
"html_url": "https://github.com/oliverbob",
"followers_url": "https://api.github.com/users/oliverbob/followers",
"following_url": "https://api.github.com/users/oliverbob/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverbob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliverbob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverbob/subscriptions",
"organizations_url": "https://api.github.com/users/oliverbob/orgs",
"repos_url": "https://api.github.com/users/oliverbob/repos",
"events_url": "https://api.github.com/users/oliverbob/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliverbob/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 9
| 2024-01-19T14:58:44
| 2024-03-18T20:47:59
| 2024-02-20T22:51:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi everyone, as always, thank you for the great work you have done with this project for the good of humanity. I have tried importing a GGUF file of tinyllama from Hugging Face, but when I chat with it using Ollama, it returns gibberish. But when I download the one from Ollama with ollama pull/run tinyllama, it works great!
Question:
Can I possibly request access to how the training data is fed into this tinyllama Ollama model, since it is open source? One of the reasons I'm interested is my research on function calling.
Also, there have been a lot of tests and tutorials out there about finetuning this model, but your model at https://ollama.ai/library/tinyllama/tags outperforms all the examples I can find on the internet about tinyllama.
If the source is closed, I want to at least have an idea of how to train it on a custom dataset. In layman's terms, I want to understand how the Ollama team is able to train this model into the kind of model that is currently available to Ollama users, and I want to know why it's so different from, and outperforms, the original GGUF model found at https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6.
I'd like to be able to use this as a sample for my students, as well as to practically teach my own children how a powerful language model such as tinyllama works. I'm also working on a curriculum thesis in collaboration with teachers and school owners, testing whether it's practical to integrate AI training and data science into the field of education, so your input will be of very great benefit to this little community in advancing our research in the field.
I want to highlight that importing the raw GGUF yields a noticeable difference in model size, which could explain why the Ollama version is smarter. In the following screenshot, I called this GGUF from HF "baby." This is an indication to me that someone has done a better job of finetuning it, and I want to know how to do it, if someone would be kind enough to give us some guidance.

Thank you very much.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2096/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3765
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3765/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3765/comments
|
https://api.github.com/repos/ollama/ollama/issues/3765/events
|
https://github.com/ollama/ollama/issues/3765
| 2,253,970,292
|
I_kwDOJ0Z1Ps6GWNt0
| 3,765
|
CUDA error: out of memory - other VRAM consumers not detected in available memory
|
{
"login": "martinus",
"id": 14386,
"node_id": "MDQ6VXNlcjE0Mzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/martinus",
"html_url": "https://github.com/martinus",
"followers_url": "https://api.github.com/users/martinus/followers",
"following_url": "https://api.github.com/users/martinus/following{/other_user}",
"gists_url": "https://api.github.com/users/martinus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/martinus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/martinus/subscriptions",
"organizations_url": "https://api.github.com/users/martinus/orgs",
"repos_url": "https://api.github.com/users/martinus/repos",
"events_url": "https://api.github.com/users/martinus/events{/privacy}",
"received_events_url": "https://api.github.com/users/martinus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 18
| 2024-04-19T20:47:24
| 2024-06-14T22:35:02
| 2024-06-14T22:35:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I try the llama3 model I get out of memory errors. I have 64GB of RAM and 24GB on the GPU.
```
❯ ollama run llama3:70b-instruct-q2_K --verbose "write a constexpr GCD that is not recursive in C++17"
Error: an unknown error was encountered while running the model CUDA error: out of memory
current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:233
hipMalloc((void **) &ptr, look_ahead_size)
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !"CUDA error"
```
` journalctl -u ollama.service -f` shows
```
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: n_ctx = 2048
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: n_batch = 512
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: n_ubatch = 512
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: freq_base = 500000.0
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: freq_scale = 1
Apr 19 22:43:30 box ollama[641298]: llama_kv_cache_init: ROCm0 KV buffer size = 488.00 MiB
Apr 19 22:43:30 box ollama[641298]: llama_kv_cache_init: ROCm_Host KV buffer size = 152.00 MiB
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: KV self size = 640.00 MiB, K (f16): 320.00 MiB, V (f16): 320.00 MiB
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: ROCm_Host output buffer size = 0.52 MiB
Apr 19 22:43:30 box ollama[641298]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 1088.45 MiB on device 0: cudaMalloc failed: out of memory
Apr 19 22:43:30 box ollama[641298]: ggml_gallocr_reserve_n: failed to allocate ROCm0 buffer of size 1141325824
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: failed to allocate compute buffers
```
Sometimes I get past this, then it fails a few lines later. It then shows a stacktrace, if that helps:
```
Apr 19 22:44:49 box ollama[691627]: 0x00007f17e17d9fa3 in wait4 () from /lib64/libc.so.6
Apr 19 22:44:49 box ollama[691627]: #0 0x00007f17e17d9fa3 in wait4 () from /lib64/libc.so.6
Apr 19 22:44:49 box ollama[691627]: #1 0x00000000024e8084 in ggml_cuda_error(char const*, char const*, char const*, int, char const*) ()
Apr 19 22:44:49 box ollama[691627]: #2 0x00000000024fc062 in ggml_cuda_pool_leg::alloc(unsigned long, unsigned long*) ()
Apr 19 22:44:49 box ollama[691627]: #3 0x00000000024fc790 in ggml_cuda_pool_alloc<__half>::alloc(unsigned long) ()
Apr 19 22:44:49 box ollama[691627]: #4 0x00000000024f2ccf in ggml_cuda_mul_mat(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*) ()
Apr 19 22:44:49 box ollama[691627]: #5 0x00000000024ebae3 in ggml_backend_cuda_graph_compute(ggml_backend*, ggml_cgraph*) ()
Apr 19 22:44:49 box ollama[691627]: #6 0x00000000024b3888 in ggml_backend_sched_graph_compute_async ()
Apr 19 22:44:49 box ollama[691627]: #7 0x00000000023d2819 in llama_decode ()
Apr 19 22:44:49 box ollama[691627]: #8 0x00000000022df081 in llama_server_context::update_slots() ()
Apr 19 22:44:49 box ollama[691627]: #9 0x00000000022e10ba in llama_server_queue::start_loop() ()
Apr 19 22:44:49 box ollama[691627]: #10 0x00000000022c4e02 in main ()
Apr 19 22:44:49 box ollama[691627]: [Inferior 1 (process 691260) detached]
```
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3765/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3765/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8319
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8319/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8319/comments
|
https://api.github.com/repos/ollama/ollama/issues/8319/events
|
https://github.com/ollama/ollama/pull/8319
| 2,770,591,352
|
PR_kwDOJ0Z1Ps6G0bib
| 8,319
|
Add Safetensor Conversion for Granite Models
|
{
"login": "alex-jw-brooks",
"id": 10740300,
"node_id": "MDQ6VXNlcjEwNzQwMzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/10740300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex-jw-brooks",
"html_url": "https://github.com/alex-jw-brooks",
"followers_url": "https://api.github.com/users/alex-jw-brooks/followers",
"following_url": "https://api.github.com/users/alex-jw-brooks/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-jw-brooks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex-jw-brooks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-jw-brooks/subscriptions",
"organizations_url": "https://api.github.com/users/alex-jw-brooks/orgs",
"repos_url": "https://api.github.com/users/alex-jw-brooks/repos",
"events_url": "https://api.github.com/users/alex-jw-brooks/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex-jw-brooks/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-06T12:48:04
| 2025-01-16T04:20:40
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8319",
"html_url": "https://github.com/ollama/ollama/pull/8319",
"diff_url": "https://github.com/ollama/ollama/pull/8319.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8319.patch",
"merged_at": null
}
|
This PR fixes the unrecognized architecture for converting Granite Models (`GraniteForCausalLM`) for use from safetensors.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8319/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1430
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1430/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1430/comments
|
https://api.github.com/repos/ollama/ollama/issues/1430/events
|
https://github.com/ollama/ollama/issues/1430
| 2,031,816,063
|
I_kwDOJ0Z1Ps55Gw1_
| 1,430
|
cuda error 222 after building
|
{
"login": "rhettg",
"id": 50074,
"node_id": "MDQ6VXNlcjUwMDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/50074?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rhettg",
"html_url": "https://github.com/rhettg",
"followers_url": "https://api.github.com/users/rhettg/followers",
"following_url": "https://api.github.com/users/rhettg/following{/other_user}",
"gists_url": "https://api.github.com/users/rhettg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rhettg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rhettg/subscriptions",
"organizations_url": "https://api.github.com/users/rhettg/orgs",
"repos_url": "https://api.github.com/users/rhettg/repos",
"events_url": "https://api.github.com/users/rhettg/events{/privacy}",
"received_events_url": "https://api.github.com/users/rhettg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2023-12-08T02:14:32
| 2024-02-01T23:15:41
| 2024-02-01T23:15:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This might be a llama.cpp question, but I'm struggling to get Ollama to work when I build it myself.
The release builds work fine for me:
```console
$ sudo -u ollama /usr/bin/ollama serve
2023/12/07 17:52:41 images.go:779: total blobs: 10
2023/12/07 17:52:41 images.go:786: total unused blobs removed: 0
2023/12/07 17:52:41 routes.go:777: Listening on 127.0.0.1:11434 (version 0.1.11)
2023/12/07 17:53:08 llama.go:291: 9973 MB VRAM available, loading up to 60 GPU layers
2023/12/07 17:53:08 llama.go:420: starting llama runner
2023/12/07 17:53:08 llama.go:478: waiting for llama runner to start responding
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3080, compute capability 8.6
{"timestamp":1702000389,"level":"INFO","function":"main","line":1323,"message":"build info","build":219,"commit":"9e70cc0"}
{"timestamp":1702000389,"level":"INFO","function":"main","line":1325,"message":"system info","n_threads":8,"n_threads_batch":-1,"total_threads":16,"system_info":"AVX = 1 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
```
But when I build it, I see this:
```console
CUDA error 222 at /home/rhettg/Projects/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:7003: the provided PTX was compiled with an unsupported toolchain.
current device: 0
2023/12/07 17:58:10 llama.go:441: 222 at /home/rhettg/Projects/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:7003: the provided PTX was compiled with an unsupported toolchain.
current device: 0
2023/12/07 17:58:10 llama.go:449: error starting llama runner: llama runner process has terminated
2023/12/07 17:58:10 llama.go:515: llama runner stopped successfully
```
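One hedged sketch of a likely cause: CUDA error 222 ("the provided PTX was compiled with an unsupported toolchain") typically appears when the toolkit that built the binary is newer than what the installed driver supports. In the logs above, `nvidia-smi` reports CUDA 12.2 while `nvcc` is release 12.3. The version strings and variable names below are illustrative, not a definitive diagnosis:

```shell
# Compare the driver's max supported CUDA version (from `nvidia-smi`)
# against the toolkit version that built the binary (from `nvcc --version`).
# Values are hard-coded here to match the logs above; in practice you
# would parse them from the two commands.
driver_cuda="12.2"   # nvidia-smi header shows "CUDA Version: 12.2"
toolkit_cuda="12.3"  # nvcc shows "release 12.3, V12.3.103"

# sort -V orders version strings numerically; if the newest one is not
# the driver's, the driver may reject PTX produced by the toolkit.
newest=$(printf '%s\n%s\n' "$driver_cuda" "$toolkit_cuda" | sort -V | tail -n1)
if [ "$newest" != "$driver_cuda" ]; then
  echo "toolkit ($toolkit_cuda) newer than driver ($driver_cuda): PTX may be rejected (CUDA error 222)"
fi
```

If the mismatch is confirmed, either upgrading the driver or building against a CUDA 12.2 (or older) toolkit would be the usual remedies.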
<details><summary>More version details:</summary>
```console
$ git show HEAD
commit dd427f499a65b2357f6b47ab3eed62478f42397a (HEAD -> main, origin/main, origin/HEAD)
Merge: 2ae573c 02fe26c
Author: Matt Williams <m@technovangelist.com>
Date: Thu Dec 7 14:42:24 2023 -0800
Merge pull request #1419 from jmorganca/mattw/typescript-simplechat
Simple chat example for typescript
$ nvidia-smi
Thu Dec 7 17:59:22 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03 Driver Version: 535.129.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3080 Off | 00000000:2B:00.0 On | N/A |
| 0% 37C P8 23W / 320W | 2MiB / 10240MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
$ /usr/local/cuda/bin/nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Nov__3_17:16:49_PDT_2023
Cuda compilation tools, release 12.3, V12.3.103
Build cuda_12.3.r12.3/compiler.33492891_0
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.3 LTS
Release: 22.04
Codename: jammy
$ dpkg -l | grep nvidia
ii libnvidia-cfg1-535:amd64 535.129.03-0ubuntu0.22.04.1 amd64 NVIDIA binary OpenGL/GLX configuration library
ii libnvidia-common-535 535.129.03-0ubuntu1 all Shared files used by the NVIDIA libraries
ii libnvidia-compute-535:amd64 535.129.03-0ubuntu0.22.04.1 amd64 NVIDIA libcompute package
ii libnvidia-compute-535:i386 535.129.03-0ubuntu0.22.04.1 i386 NVIDIA libcompute package
ii libnvidia-decode-535:amd64 535.129.03-0ubuntu0.22.04.1 amd64 NVIDIA Video Decoding runtime libraries
ii libnvidia-decode-535:i386 535.129.03-0ubuntu0.22.04.1 i386 NVIDIA Video Decoding runtime libraries
ii libnvidia-encode-535:amd64 535.129.03-0ubuntu0.22.04.1 amd64 NVENC Video Encoding runtime library
ii libnvidia-encode-535:i386 535.129.03-0ubuntu0.22.04.1 i386 NVENC Video Encoding runtime library
ii libnvidia-extra-535:amd64 535.129.03-0ubuntu0.22.04.1 amd64 Extra libraries for the NVIDIA driver
ii libnvidia-fbc1-535:amd64 535.129.03-0ubuntu0.22.04.1 amd64 NVIDIA OpenGL-based Framebuffer Capture runtime library
ii libnvidia-fbc1-535:i386 535.129.03-0ubuntu0.22.04.1 i386 NVIDIA OpenGL-based Framebuffer Capture runtime library
ii libnvidia-gl-535:amd64 535.129.03-0ubuntu0.22.04.1 amd64 NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD
ii libnvidia-gl-535:i386 535.129.03-0ubuntu0.22.04.1 i386 NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD
rc linux-modules-nvidia-535-6.2.0-26-generic 6.2.0-26.26~22.04.1+2 amd64 Linux kernel nvidia modules for version 6.2.0-26
ii linux-modules-nvidia-535-6.2.0-36-generic 6.2.0-36.37~22.04.1+1 amd64 Linux kernel nvidia modules for version 6.2.0-36
ii linux-modules-nvidia-535-6.2.0-37-generic 6.2.0-37.38~22.04.1 amd64 Linux kernel nvidia modules for version 6.2.0-37
ii linux-modules-nvidia-535-generic-hwe-22.04 6.2.0-37.38~22.04.1 amd64 Extra drivers for nvidia-535 for the generic-hwe-22.04 flavour
rc linux-objects-nvidia-535-6.2.0-26-generic 6.2.0-26.26~22.04.1+2 amd64 Linux kernel nvidia modules for version 6.2.0-26 (objects)
ii linux-objects-nvidia-535-6.2.0-36-generic 6.2.0-36.37~22.04.1+1 amd64 Linux kernel nvidia modules for version 6.2.0-36 (objects)
ii linux-objects-nvidia-535-6.2.0-37-generic 6.2.0-37.38~22.04.1 amd64 Linux kernel nvidia modules for version 6.2.0-37 (objects)
ii linux-signatures-nvidia-6.2.0-36-generic 6.2.0-36.37~22.04.1+1 amd64 Linux kernel signatures for nvidia modules for version 6.2.0-36-generic
ii linux-signatures-nvidia-6.2.0-37-generic 6.2.0-37.38~22.04.1 amd64 Linux kernel signatures for nvidia modules for version 6.2.0-37-generic
ii nvidia-compute-utils-535 535.129.03-0ubuntu0.22.04.1 amd64 NVIDIA compute utilities
rc nvidia-cuda-toolkit 11.5.1-1ubuntu1 amd64 NVIDIA CUDA development toolkit
ii nvidia-dkms-535 535.129.03-0ubuntu1 amd64 NVIDIA DKMS package
ii nvidia-driver-535 535.129.03-0ubuntu0.22.04.1 amd64 NVIDIA driver metapackage
ii nvidia-fs 2.18.3-1 amd64 NVIDIA filesystem for GPUDirect Storage
ii nvidia-fs-dkms 2.18.3-1 amd64 NVIDIA filesystem DKMS package
ii nvidia-gds 12.3.1-1 amd64 GPU Direct Storage meta-package
ii nvidia-gds-12-3 12.3.1-1 amd64 GPU Direct Storage 12.3 meta-package
ii nvidia-kernel-common-535 535.129.03-0ubuntu1 amd64 Shared files used with the kernel module
ii nvidia-kernel-source-535 535.129.03-0ubuntu0.22.04.1 amd64 NVIDIA kernel source package
ii nvidia-prime 0.8.17.1 all Tools to enable NVIDIA's Prime
ii nvidia-settings 545.23.08-0ubuntu1 amd64 Tool for configuring the NVIDIA graphics driver
ii nvidia-utils-535 535.129.03-0ubuntu0.22.04.1 amd64 NVIDIA driver support binaries
ii screen-resolution-extra 0.18.2 all Extension for the nvidia-settings control panel
ii xserver-xorg-video-nvidia-535 535.129.03-0ubuntu0.22.04.1 amd64 NVIDIA binary Xorg driver
```
</details>
I did recently upgrade my Nvidia toolchain, but as far as I can tell none of the old versions are left around. It looks like CMake picked up the correct version of `nvcc`:
<details><summary>/home/rhettg/Projects/ollama/llm/llama.cpp/gguf/build/cuda/CMakeFiles/3.22.1/CMakeCUDACompiler.cmake</summary>
```make
$ cat CMakeCUDACompiler.cmake
set(CMAKE_CUDA_COMPILER "/usr/local/cuda/bin/nvcc")
set(CMAKE_CUDA_HOST_COMPILER "")
set(CMAKE_CUDA_HOST_LINK_LAUNCHER "/usr/bin/g++")
set(CMAKE_CUDA_COMPILER_ID "NVIDIA")
set(CMAKE_CUDA_COMPILER_VERSION "12.3.103")
set(CMAKE_CUDA_DEVICE_LINKER "/usr/local/cuda/bin/nvlink")
set(CMAKE_CUDA_FATBINARY "/usr/local/cuda/bin/fatbinary")
set(CMAKE_CUDA_STANDARD_COMPUTED_DEFAULT "17")
set(CMAKE_CUDA_EXTENSIONS_COMPUTED_DEFAULT "ON")
set(CMAKE_CUDA_COMPILE_FEATURES "cuda_std_03;cuda_std_11;cuda_std_14;cuda_std_17")
set(CMAKE_CUDA03_COMPILE_FEATURES "cuda_std_03")
set(CMAKE_CUDA11_COMPILE_FEATURES "cuda_std_11")
set(CMAKE_CUDA14_COMPILE_FEATURES "cuda_std_14")
set(CMAKE_CUDA17_COMPILE_FEATURES "cuda_std_17")
set(CMAKE_CUDA20_COMPILE_FEATURES "")
set(CMAKE_CUDA23_COMPILE_FEATURES "")
set(CMAKE_CUDA_PLATFORM_ID "Linux")
set(CMAKE_CUDA_SIMULATE_ID "GNU")
set(CMAKE_CUDA_COMPILER_FRONTEND_VARIANT "")
set(CMAKE_CUDA_SIMULATE_VERSION "11.4")
set(CMAKE_CUDA_COMPILER_ENV_VAR "CUDACXX")
set(CMAKE_CUDA_HOST_COMPILER_ENV_VAR "CUDAHOSTCXX")
set(CMAKE_CUDA_COMPILER_LOADED 1)
set(CMAKE_CUDA_COMPILER_ID_RUN 1)
set(CMAKE_CUDA_SOURCE_FILE_EXTENSIONS cu)
set(CMAKE_CUDA_LINKER_PREFERENCE 15)
set(CMAKE_CUDA_LINKER_PREFERENCE_PROPAGATES 1)
set(CMAKE_CUDA_SIZEOF_DATA_PTR "8")
set(CMAKE_CUDA_COMPILER_ABI "ELF")
set(CMAKE_CUDA_BYTE_ORDER "LITTLE_ENDIAN")
set(CMAKE_CUDA_LIBRARY_ARCHITECTURE "x86_64-linux-gnu")
if(CMAKE_CUDA_SIZEOF_DATA_PTR)
set(CMAKE_SIZEOF_VOID_P "${CMAKE_CUDA_SIZEOF_DATA_PTR}")
endif()
if(CMAKE_CUDA_COMPILER_ABI)
set(CMAKE_INTERNAL_PLATFORM_ABI "${CMAKE_CUDA_COMPILER_ABI}")
endif()
if(CMAKE_CUDA_LIBRARY_ARCHITECTURE)
set(CMAKE_LIBRARY_ARCHITECTURE "x86_64-linux-gnu")
endif()
set(CMAKE_CUDA_COMPILER_TOOLKIT_ROOT "/usr/local/cuda")
set(CMAKE_CUDA_COMPILER_TOOLKIT_LIBRARY_ROOT "/usr/local/cuda")
set(CMAKE_CUDA_COMPILER_LIBRARY_ROOT "/usr/local/cuda")
set(CMAKE_CUDA_TOOLKIT_INCLUDE_DIRECTORIES "/usr/local/cuda/targets/x86_64-linux/include")
set(CMAKE_CUDA_HOST_IMPLICIT_LINK_LIBRARIES "")
set(CMAKE_CUDA_HOST_IMPLICIT_LINK_DIRECTORIES "/usr/local/cuda/targets/x86_64-linux/lib/stubs;/usr/local/cuda/targets/x86_64-linux/lib")
set(CMAKE_CUDA_HOST_IMPLICIT_LINK_FRAMEWORK_DIRECTORIES "")
set(CMAKE_CUDA_IMPLICIT_INCLUDE_DIRECTORIES "/usr/include/c++/11;/usr/include/x86_64-linux-gnu/c++/11;/usr/include/c++/11/backward;/usr/lib/gcc/x86_64-linux-gnu/11/include;/usr/local/include;/usr/include/x86_64-linux-gnu;/usr/include")
set(CMAKE_CUDA_IMPLICIT_LINK_LIBRARIES "stdc++;m;gcc_s;gcc;c;gcc_s;gcc")
set(CMAKE_CUDA_IMPLICIT_LINK_DIRECTORIES "/usr/local/cuda/targets/x86_64-linux/lib/stubs;/usr/local/cuda/targets/x86_64-linux/lib;/usr/lib/gcc/x86_64-linux-gnu/11;/usr/lib/x86_64-linux-gnu;/usr/lib;/lib/x86_64-linux-gnu;/lib")
set(CMAKE_CUDA_IMPLICIT_LINK_FRAMEWORK_DIRECTORIES "")
set(CMAKE_CUDA_RUNTIME_LIBRARY_DEFAULT "STATIC")
set(CMAKE_LINKER "/usr/bin/ld")
set(CMAKE_AR "/usr/bin/ar")
set(CMAKE_MT "")
```
</details>
Any help understanding what's wrong here would be appreciated.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1430/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8662
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8662/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8662/comments
|
https://api.github.com/repos/ollama/ollama/issues/8662/events
|
https://github.com/ollama/ollama/pull/8662
| 2,818,376,666
|
PR_kwDOJ0Z1Ps6JXtvk
| 8,662
|
Update README.md Adding DeepSeek to the table of models
|
{
"login": "teymuur",
"id": 64795612,
"node_id": "MDQ6VXNlcjY0Nzk1NjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/64795612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/teymuur",
"html_url": "https://github.com/teymuur",
"followers_url": "https://api.github.com/users/teymuur/followers",
"following_url": "https://api.github.com/users/teymuur/following{/other_user}",
"gists_url": "https://api.github.com/users/teymuur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/teymuur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/teymuur/subscriptions",
"organizations_url": "https://api.github.com/users/teymuur/orgs",
"repos_url": "https://api.github.com/users/teymuur/repos",
"events_url": "https://api.github.com/users/teymuur/events{/privacy}",
"received_events_url": "https://api.github.com/users/teymuur/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2025-01-29T14:25:25
| 2025-01-29T14:33:35
| 2025-01-29T14:33:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8662",
"html_url": "https://github.com/ollama/ollama/pull/8662",
"diff_url": "https://github.com/ollama/ollama/pull/8662.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8662.patch",
"merged_at": null
}
|
This is just a minor change: I added DeepSeek R1 to the model library table. Only `README.md` was changed.
|
{
"login": "teymuur",
"id": 64795612,
"node_id": "MDQ6VXNlcjY0Nzk1NjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/64795612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/teymuur",
"html_url": "https://github.com/teymuur",
"followers_url": "https://api.github.com/users/teymuur/followers",
"following_url": "https://api.github.com/users/teymuur/following{/other_user}",
"gists_url": "https://api.github.com/users/teymuur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/teymuur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/teymuur/subscriptions",
"organizations_url": "https://api.github.com/users/teymuur/orgs",
"repos_url": "https://api.github.com/users/teymuur/repos",
"events_url": "https://api.github.com/users/teymuur/events{/privacy}",
"received_events_url": "https://api.github.com/users/teymuur/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8662/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5631
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5631/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5631/comments
|
https://api.github.com/repos/ollama/ollama/issues/5631/events
|
https://github.com/ollama/ollama/pull/5631
| 2,403,403,309
|
PR_kwDOJ0Z1Ps51G6eZ
| 5,631
|
Refactor linux packaging
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-07-11T14:56:45
| 2024-08-17T17:16:53
| 2024-08-17T17:16:45
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5631",
"html_url": "https://github.com/ollama/ollama/pull/5631",
"diff_url": "https://github.com/ollama/ollama/pull/5631.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5631.patch",
"merged_at": null
}
|
This adjusts Linux to follow a model similar to Windows, with a discrete archive (zip/tgz) that carries the primary executable and its dependent libraries. Runners are still carried as payloads inside the main binary.
As Darwin has no significant dependent libraries, it still functions as a discrete stand-alone executable carrying the runners as payloads.
Replaces #5582
Fixes #5737
Fixes #2361
Fixes #6144
```
% ls -lh dist/ollama-linux-amd64.tgz
-rw-r--r-- 1 daniel staff 1.6G Jul 10 18:14 dist/ollama-linux-amd64.tgz
```
```
% ls -F
cuda/ ollama* rocm/
% du -sh .
7.8G .
% du -sh *
369M cuda
245M ollama
7.2G rocm
```
```
% find /tmp/ollama3466897970/ -type f | xargs ls -lh
-rwxrwxr-x 1 daniel daniel 7 Jul 11 08:00 /tmp/ollama3466897970/ollama.pid
-rwxr-xr-x 1 daniel daniel 808K Jul 11 08:00 /tmp/ollama3466897970/runners/cpu_avx2/libggml.so
-rwxr-xr-x 1 daniel daniel 1.9M Jul 11 08:00 /tmp/ollama3466897970/runners/cpu_avx2/libllama.so
-rwxr-xr-x 1 daniel daniel 1.8M Jul 11 08:00 /tmp/ollama3466897970/runners/cpu_avx2/ollama_llama_server
-rwxr-xr-x 1 daniel daniel 790K Jul 11 08:00 /tmp/ollama3466897970/runners/cpu_avx/libggml.so
-rwxr-xr-x 1 daniel daniel 1.9M Jul 11 08:00 /tmp/ollama3466897970/runners/cpu_avx/libllama.so
-rwxr-xr-x 1 daniel daniel 1.8M Jul 11 08:00 /tmp/ollama3466897970/runners/cpu_avx/ollama_llama_server
-rwxr-xr-x 1 daniel daniel 714K Jul 11 08:00 /tmp/ollama3466897970/runners/cpu/libggml.so
-rwxr-xr-x 1 daniel daniel 1.9M Jul 11 08:00 /tmp/ollama3466897970/runners/cpu/libllama.so
-rwxr-xr-x 1 daniel daniel 1.8M Jul 11 08:00 /tmp/ollama3466897970/runners/cpu/ollama_llama_server
-rwxr-xr-x 1 daniel daniel 316M Jul 11 08:00 /tmp/ollama3466897970/runners/cuda_v11/libggml.so
-rwxr-xr-x 1 daniel daniel 1.9M Jul 11 08:00 /tmp/ollama3466897970/runners/cuda_v11/libllama.so
-rwxr-xr-x 1 daniel daniel 1.8M Jul 11 08:00 /tmp/ollama3466897970/runners/cuda_v11/ollama_llama_server
-rwxr-xr-x 1 daniel daniel 298M Jul 11 08:00 /tmp/ollama3466897970/runners/rocm_v60101/libggml.so
-rwxr-xr-x 1 daniel daniel 1.9M Jul 11 08:00 /tmp/ollama3466897970/runners/rocm_v60101/libllama.so
-rwxr-xr-x 1 daniel daniel 1.7M Jul 11 08:00 /tmp/ollama3466897970/runners/rocm_v60101/ollama_llama_server
```
<details>
<summary>ldd output</summary>
```
% find /tmp/ollama3466897970/runners -type f | LD_LIBRARY_PATH=/home/daniel/ollama/cuda:/home/daniel/ollama/rocm:/tmp/ollama3466897970/runners/cuda_v11:/tmp/ollama3466897970/runners xargs ldd
/tmp/ollama3466897970/runners/cuda_v11/libggml.so:
linux-vdso.so.1 (0x00007ffe776c2000)
libcudart.so.11.0 (0x00007f5169400000)
libcublas.so.11 (0x00007f5161c00000)
libcublasLt.so.11 (0x00007f5151000000)
libcuda.so.1 => /lib/x86_64-linux-gnu/libcuda.so.1 (0x00007f514f800000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5169781000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f516977c000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f5169777000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f5169772000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f514f5d4000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f5169752000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f514f3ac000)
/lib64/ld-linux-x86-64.so.2 (0x00007f517d482000)
/tmp/ollama3466897970/runners/cuda_v11/ollama_llama_server:
linux-vdso.so.1 (0x00007ffd38567000)
libllama.so => /tmp/ollama3466897970/runners/cuda_v11/libllama.so (0x00007f6668611000)
libggml.so => /tmp/ollama3466897970/runners/cuda_v11/libggml.so (0x00007f6654a14000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f66547cf000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f66546e8000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f66546c6000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f66546c1000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6654499000)
/lib64/ld-linux-x86-64.so.2 (0x00007f66687b3000)
libcudart.so.11.0 (0x00007f6654000000)
libcublas.so.11 (0x00007f664c800000)
libcublasLt.so.11 (0x00007f663bc00000)
libcuda.so.1 => /lib/x86_64-linux-gnu/libcuda.so.1 (0x00007f663a400000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f6654492000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f665448d000)
/tmp/ollama3466897970/runners/cuda_v11/libllama.so:
linux-vdso.so.1 (0x00007fffed5fd000)
libggml.so => /tmp/ollama3466897970/runners/cuda_v11/libggml.so (0x00007f69584ac000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f6958267000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f6958180000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f6958160000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6957f38000)
/lib64/ld-linux-x86-64.so.2 (0x00007f696c24b000)
libcudart.so.11.0 (0x00007f6957c00000)
libcublas.so.11 (0x00007f6950400000)
libcublasLt.so.11 (0x00007f693f800000)
libcuda.so.1 => /lib/x86_64-linux-gnu/libcuda.so.1 (0x00007f693e000000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f6957f31000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f6957f2a000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f6957f25000)
/tmp/ollama3466897970/runners/cpu_avx/libggml.so:
linux-vdso.so.1 (0x00007fffb28b7000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f183d866000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f183d63a000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f183d61a000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f183d615000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f183d3ed000)
/lib64/ld-linux-x86-64.so.2 (0x00007f183daaa000)
/tmp/ollama3466897970/runners/cpu_avx/ollama_llama_server:
linux-vdso.so.1 (0x00007ffdac5a0000)
libllama.so => /tmp/ollama3466897970/runners/cuda_v11/libllama.so (0x00007ff44d0a6000)
libggml.so => /tmp/ollama3466897970/runners/cuda_v11/libggml.so (0x00007ff4394a9000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007ff439264000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007ff43917d000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007ff43915b000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ff439156000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ff438f2e000)
/lib64/ld-linux-x86-64.so.2 (0x00007ff44d248000)
libcudart.so.11.0 (0x00007ff438c00000)
libcublas.so.11 (0x00007ff431400000)
libcublasLt.so.11 (0x00007ff420800000)
libcuda.so.1 => /lib/x86_64-linux-gnu/libcuda.so.1 (0x00007ff41f000000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007ff438f27000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ff438f22000)
/tmp/ollama3466897970/runners/cpu_avx/libllama.so:
linux-vdso.so.1 (0x00007ffd66ff5000)
libggml.so => /tmp/ollama3466897970/runners/cuda_v11/libggml.so (0x00007fa2366f3000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fa2364ae000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fa2363c7000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fa2363a7000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fa23617f000)
/lib64/ld-linux-x86-64.so.2 (0x00007fa24a490000)
libcudart.so.11.0 (0x00007fa235e00000)
libcublas.so.11 (0x00007fa22e600000)
libcublasLt.so.11 (0x00007fa21da00000)
libcuda.so.1 => /lib/x86_64-linux-gnu/libcuda.so.1 (0x00007fa21c200000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fa236178000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fa236171000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fa23616c000)
/tmp/ollama3466897970/runners/rocm_v60101/libggml.so:
linux-vdso.so.1 (0x00007fff7d7fe000)
libhipblas.so.2 => /home/daniel/ollama/rocm/libhipblas.so.2 (0x00007ff4e98b2000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007ff4e97b2000)
librocblas.so.4 => /home/daniel/ollama/rocm/librocblas.so.4 (0x00007ff4b5418000)
libamdhip64.so.6 => /home/daniel/ollama/rocm/libamdhip64.so.6 (0x00007ff4b397e000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007ff4b3750000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007ff4b3730000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ff4b372b000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ff4b3503000)
librocsolver.so.0 => /home/daniel/ollama/rocm/librocsolver.so.0 (0x00007ff460fbf000)
/lib64/ld-linux-x86-64.so.2 (0x00007ff4fc33f000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ff460fba000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007ff460fb3000)
libamd_comgr.so.2 => /home/daniel/ollama/rocm/libamd_comgr.so.2 (0x00007ff458287000)
libhsa-runtime64.so.1 => /home/daniel/ollama/rocm/libhsa-runtime64.so.1 (0x00007ff457f9f000)
libnuma.so.1 => /lib/x86_64-linux-gnu/libnuma.so.1 (0x00007ff457f92000)
librocsparse.so.1 => /home/daniel/ollama/rocm/librocsparse.so.1 (0x00007ff406fd4000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007ff406fb6000)
libtinfo.so.5 => /home/daniel/ollama/rocm/libtinfo.so.5 (0x00007ff406c00000)
libelf.so.1 => /lib/x86_64-linux-gnu/libelf.so.1 (0x00007ff406f98000)
librocprofiler-register.so.0 => /home/daniel/ollama/rocm/librocprofiler-register.so.0 (0x00007ff406ebb000)
libdrm.so.2 => /home/daniel/ollama/rocm/libdrm.so.2 (0x00007ff406ea4000)
libdrm_amdgpu.so.1 => /home/daniel/ollama/rocm/libdrm_amdgpu.so.1 (0x00007ff406e97000)
/tmp/ollama3466897970/runners/rocm_v60101/ollama_llama_server:
linux-vdso.so.1 (0x00007ffdfab57000)
libllama.so => /tmp/ollama3466897970/runners/cuda_v11/libllama.so (0x00007f69dd1dd000)
libggml.so => /tmp/ollama3466897970/runners/cuda_v11/libggml.so (0x00007f69c95e0000)
libhipblas.so.2 => /home/daniel/ollama/rocm/libhipblas.so.2 (0x00007f69c951b000)
librocblas.so.4 => /home/daniel/ollama/rocm/librocblas.so.4 (0x00007f6995181000)
libamdhip64.so.6 => /home/daniel/ollama/rocm/libamdhip64.so.6 (0x00007f69936e5000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f69934a0000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f69933b9000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f6993399000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f6993394000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f699316c000)
/lib64/ld-linux-x86-64.so.2 (0x00007f69dd37f000)
libcudart.so.11.0 (0x00007f6992e00000)
libcublas.so.11 (0x00007f698b600000)
libcublasLt.so.11 (0x00007f697aa00000)
libcuda.so.1 => /lib/x86_64-linux-gnu/libcuda.so.1 (0x00007f6979200000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f6993165000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f699315e000)
librocsolver.so.0 => /home/daniel/ollama/rocm/librocsolver.so.0 (0x00007f6926cbc000)
libamd_comgr.so.2 => /home/daniel/ollama/rocm/libamd_comgr.so.2 (0x00007f691df90000)
libhsa-runtime64.so.1 => /home/daniel/ollama/rocm/libhsa-runtime64.so.1 (0x00007f691dca8000)
libnuma.so.1 => /lib/x86_64-linux-gnu/libnuma.so.1 (0x00007f6993151000)
librocsparse.so.1 => /home/daniel/ollama/rocm/librocsparse.so.1 (0x00007f68cccea000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f6993133000)
libtinfo.so.5 => /home/daniel/ollama/rocm/libtinfo.so.5 (0x00007f68cca00000)
libelf.so.1 => /lib/x86_64-linux-gnu/libelf.so.1 (0x00007f6993115000)
librocprofiler-register.so.0 => /home/daniel/ollama/rocm/librocprofiler-register.so.0 (0x00007f6992d23000)
libdrm.so.2 => /home/daniel/ollama/rocm/libdrm.so.2 (0x00007f69930fc000)
libdrm_amdgpu.so.1 => /home/daniel/ollama/rocm/libdrm_amdgpu.so.1 (0x00007f69930ef000)
/tmp/ollama3466897970/runners/rocm_v60101/libllama.so:
linux-vdso.so.1 (0x00007ffdd17dc000)
libggml.so => /tmp/ollama3466897970/runners/cuda_v11/libggml.so (0x00007faadeff1000)
libhipblas.so.2 => /home/daniel/ollama/rocm/libhipblas.so.2 (0x00007faadef2c000)
librocblas.so.4 => /home/daniel/ollama/rocm/librocblas.so.4 (0x00007faaaab92000)
libamdhip64.so.6 => /home/daniel/ollama/rocm/libamdhip64.so.6 (0x00007faaa90f8000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007faaa8eb1000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007faaa8dca000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007faaa8daa000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007faaa8b82000)
/lib64/ld-linux-x86-64.so.2 (0x00007faaf2d9e000)
libcudart.so.11.0 (0x00007faaa8800000)
libcublas.so.11 (0x00007faaa1000000)
libcublasLt.so.11 (0x00007faa90400000)
libcuda.so.1 => /lib/x86_64-linux-gnu/libcuda.so.1 (0x00007faa8ec00000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007faaa8b7b000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007faaa8b76000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007faaa8b71000)
librocsolver.so.0 => /home/daniel/ollama/rocm/librocsolver.so.0 (0x00007faa3c6bc000)
libamd_comgr.so.2 => /home/daniel/ollama/rocm/libamd_comgr.so.2 (0x00007faa33990000)
libhsa-runtime64.so.1 => /home/daniel/ollama/rocm/libhsa-runtime64.so.1 (0x00007faa336a8000)
libnuma.so.1 => /lib/x86_64-linux-gnu/libnuma.so.1 (0x00007faaa8b62000)
librocsparse.so.1 => /home/daniel/ollama/rocm/librocsparse.so.1 (0x00007fa9e26ea000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007faaa8b44000)
libtinfo.so.5 => /home/daniel/ollama/rocm/libtinfo.so.5 (0x00007fa9e2400000)
libelf.so.1 => /lib/x86_64-linux-gnu/libelf.so.1 (0x00007faaa8b26000)
librocprofiler-register.so.0 => /home/daniel/ollama/rocm/librocprofiler-register.so.0 (0x00007faaa8723000)
libdrm.so.2 => /home/daniel/ollama/rocm/libdrm.so.2 (0x00007faaa8b0f000)
libdrm_amdgpu.so.1 => /home/daniel/ollama/rocm/libdrm_amdgpu.so.1 (0x00007faaa8b00000)
/tmp/ollama3466897970/runners/cpu_avx2/libggml.so:
linux-vdso.so.1 (0x00007fff4517d000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fbbfefe5000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fbbfedb9000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fbbfed99000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fbbfed94000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fbbfeb6c000)
/lib64/ld-linux-x86-64.so.2 (0x00007fbbff22d000)
/tmp/ollama3466897970/runners/cpu_avx2/ollama_llama_server:
linux-vdso.so.1 (0x00007ffc10be7000)
libllama.so => /tmp/ollama3466897970/runners/cuda_v11/libllama.so (0x00007fe6369f2000)
libggml.so => /tmp/ollama3466897970/runners/cuda_v11/libggml.so (0x00007fe622df5000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fe622bb0000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fe622ac9000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fe622aa7000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fe622aa2000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe62287a000)
/lib64/ld-linux-x86-64.so.2 (0x00007fe636b94000)
libcudart.so.11.0 (0x00007fe622400000)
libcublas.so.11 (0x00007fe61ac00000)
libcublasLt.so.11 (0x00007fe60a000000)
libcuda.so.1 => /lib/x86_64-linux-gnu/libcuda.so.1 (0x00007fe608800000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fe622873000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fe62286e000)
/tmp/ollama3466897970/runners/cpu_avx2/libllama.so:
linux-vdso.so.1 (0x00007fffe2ba2000)
libggml.so => /tmp/ollama3466897970/runners/cuda_v11/libggml.so (0x00007f84ecdfc000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f84ecbb7000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f84ecad0000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f84ecab0000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f84ec888000)
/lib64/ld-linux-x86-64.so.2 (0x00007f8500b99000)
libcudart.so.11.0 (0x00007f84ec400000)
libcublas.so.11 (0x00007f84e4c00000)
libcublasLt.so.11 (0x00007f84d4000000)
libcuda.so.1 => /lib/x86_64-linux-gnu/libcuda.so.1 (0x00007f84d2800000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f84ec881000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f84ec87a000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f84ec875000)
/tmp/ollama3466897970/runners/cpu/libggml.so:
linux-vdso.so.1 (0x00007ffe3b93f000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f24c8e90000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f24c8c64000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f24c8c44000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f24c8c3f000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f24c8a17000)
/lib64/ld-linux-x86-64.so.2 (0x00007f24c90c2000)
/tmp/ollama3466897970/runners/cpu/ollama_llama_server:
linux-vdso.so.1 (0x00007ffd0e1fc000)
libllama.so => /tmp/ollama3466897970/runners/cuda_v11/libllama.so (0x00007f15bd29b000)
libggml.so => /tmp/ollama3466897970/runners/cuda_v11/libggml.so (0x00007f15a969e000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f15a9459000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f15a9372000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f15a9350000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f15a934b000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f15a9123000)
/lib64/ld-linux-x86-64.so.2 (0x00007f15bd43d000)
libcudart.so.11.0 (0x00007f15a8e00000)
libcublas.so.11 (0x00007f15a1600000)
libcublasLt.so.11 (0x00007f1590a00000)
libcuda.so.1 => /lib/x86_64-linux-gnu/libcuda.so.1 (0x00007f158f200000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f15a911c000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f15a9117000)
/tmp/ollama3466897970/runners/cpu/libllama.so:
linux-vdso.so.1 (0x00007fff511f7000)
libggml.so => /tmp/ollama3466897970/runners/cuda_v11/libggml.so (0x00007f5810479000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f5810234000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f581014d000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f581012d000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f580ff05000)
/lib64/ld-linux-x86-64.so.2 (0x00007f5824216000)
libcudart.so.11.0 (0x00007f580fc00000)
libcublas.so.11 (0x00007f5808400000)
libcublasLt.so.11 (0x00007f57f7800000)
libcuda.so.1 => /lib/x86_64-linux-gnu/libcuda.so.1 (0x00007f57f6000000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f580fefe000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f580fef7000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f580fef2000)
```
</details>
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5631/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1112
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1112/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1112/comments
|
https://api.github.com/repos/ollama/ollama/issues/1112/events
|
https://github.com/ollama/ollama/issues/1112
| 1,991,164,963
|
I_kwDOJ0Z1Ps52rsQj
| 1,112
|
Support `ollama create` with PyTorch
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-11-13T17:53:57
| 2024-05-06T23:26:01
| 2024-05-06T23:26:01
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently, creating a model via a `Modelfile` supports importing GGUF-format model binaries. Ollama should also support importing PyTorch models directly via `ollama create`.
Related:
* https://github.com/jmorganca/ollama/issues/1037
* https://github.com/jmorganca/ollama/issues/1097
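For context, the existing GGUF flow this request would extend looks roughly like the following (the model filename is illustrative):

```
# Modelfile pointing at a local GGUF binary
FROM ./llama-2-7b.Q4_0.gguf

# then build the model from it:
#   ollama create my-model -f Modelfile
```

The feature request is for `FROM` to also accept a directory of PyTorch weights, with conversion handled internally.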
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1112/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/112
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/112/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/112/comments
|
https://api.github.com/repos/ollama/ollama/issues/112/events
|
https://github.com/ollama/ollama/pull/112
| 1,811,022,006
|
PR_kwDOJ0Z1Ps5V18xY
| 112
|
resolve modelfile before passing to server
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-19T02:34:32
| 2023-07-19T02:36:27
| 2023-07-19T02:36:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/112",
"html_url": "https://github.com/ollama/ollama/pull/112",
"diff_url": "https://github.com/ollama/ollama/pull/112.diff",
"patch_url": "https://github.com/ollama/ollama/pull/112.patch",
"merged_at": "2023-07-19T02:36:25"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/112/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4095
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4095/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4095/comments
|
https://api.github.com/repos/ollama/ollama/issues/4095/events
|
https://github.com/ollama/ollama/issues/4095
| 2,274,734,176
|
I_kwDOJ0Z1Ps6HlbBg
| 4,095
|
Is there a problem with the document?
|
{
"login": "ggjk616",
"id": 168710680,
"node_id": "U_kgDOCg5SGA",
"avatar_url": "https://avatars.githubusercontent.com/u/168710680?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggjk616",
"html_url": "https://github.com/ggjk616",
"followers_url": "https://api.github.com/users/ggjk616/followers",
"following_url": "https://api.github.com/users/ggjk616/following{/other_user}",
"gists_url": "https://api.github.com/users/ggjk616/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggjk616/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggjk616/subscriptions",
"organizations_url": "https://api.github.com/users/ggjk616/orgs",
"repos_url": "https://api.github.com/users/ggjk616/repos",
"events_url": "https://api.github.com/users/ggjk616/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggjk616/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-05-02T06:37:54
| 2024-05-02T10:16:12
| 2024-05-02T10:16:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Can you help me? In the documentation, I noticed the following statement: "You can set OLLAMA_LLM_LIBRARY to any of the available LLM libraries to bypass autodetection, so for example, if you have a CUDA card, but want to force the CPU LLM library with AVX2 vector support, use:
OLLAMA_LLM_LIBRARY="cpu_avx2" ollama serve"
But after setting OLLAMA_LLM_LIBRARY="cpu_avx2", the program still detects my GPU when loading the model, resulting in an error: Error: Post "https://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:56915->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.
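One likely cause, noted here as an assumption: the `VAR=value command` form quoted from the docs is POSIX-shell syntax and silently does nothing in PowerShell or cmd. A Windows equivalent would set the variable first, then start the server:

```powershell
# PowerShell: set the variable for this session, then start the server
$env:OLLAMA_LLM_LIBRARY = "cpu_avx2"
ollama serve
```

If the variable is set only in the client shell and not in the shell (or service) running `ollama serve`, the server will still autodetect the GPU.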
### OS
Windows
### GPU
AMD
### CPU
Intel
### Ollama version
_No response_
|
{
"login": "ggjk616",
"id": 168710680,
"node_id": "U_kgDOCg5SGA",
"avatar_url": "https://avatars.githubusercontent.com/u/168710680?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggjk616",
"html_url": "https://github.com/ggjk616",
"followers_url": "https://api.github.com/users/ggjk616/followers",
"following_url": "https://api.github.com/users/ggjk616/following{/other_user}",
"gists_url": "https://api.github.com/users/ggjk616/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggjk616/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggjk616/subscriptions",
"organizations_url": "https://api.github.com/users/ggjk616/orgs",
"repos_url": "https://api.github.com/users/ggjk616/repos",
"events_url": "https://api.github.com/users/ggjk616/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggjk616/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4095/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4155
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4155/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4155/comments
|
https://api.github.com/repos/ollama/ollama/issues/4155/events
|
https://github.com/ollama/ollama/issues/4155
| 2,279,169,147
|
I_kwDOJ0Z1Ps6H2Vx7
| 4,155
|
Add option in the install scripts to auto set OLLAMA_HOST environment variable
|
{
"login": "centopw",
"id": 30675552,
"node_id": "MDQ6VXNlcjMwNjc1NTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/30675552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/centopw",
"html_url": "https://github.com/centopw",
"followers_url": "https://api.github.com/users/centopw/followers",
"following_url": "https://api.github.com/users/centopw/following{/other_user}",
"gists_url": "https://api.github.com/users/centopw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/centopw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/centopw/subscriptions",
"organizations_url": "https://api.github.com/users/centopw/orgs",
"repos_url": "https://api.github.com/users/centopw/repos",
"events_url": "https://api.github.com/users/centopw/events{/privacy}",
"received_events_url": "https://api.github.com/users/centopw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-04T19:37:42
| 2024-05-09T21:16:30
| 2024-05-09T21:16:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
In the installer scripts, add an option that asks whether the user wants to allow other machines on the same network to connect, based on this doc: [faq](https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-mac)
I can create a PR if needed.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4155/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3030
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3030/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3030/comments
|
https://api.github.com/repos/ollama/ollama/issues/3030/events
|
https://github.com/ollama/ollama/pull/3030
| 2,177,439,108
|
PR_kwDOJ0Z1Ps5pJ9Nn
| 3,030
|
Update llama.cpp submodule to `77d1ac7`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-09T23:10:52
| 2024-03-09T23:55:35
| 2024-03-09T23:55:34
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3030",
"html_url": "https://github.com/ollama/ollama/pull/3030",
"diff_url": "https://github.com/ollama/ollama/pull/3030.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3030.patch",
"merged_at": "2024-03-09T23:55:34"
}
|
Note we use `-DLLAMA_METAL_EMBED_LIBRARY=on` on arm64 darwin to embed ggml-metal.metal. This change also required us to prepend ggml-common.h to the top of ggml-metal.metal to avoid a runtime lookup error.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3030/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1157
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1157/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1157/comments
|
https://api.github.com/repos/ollama/ollama/issues/1157/events
|
https://github.com/ollama/ollama/issues/1157
| 1,997,826,755
|
I_kwDOJ0Z1Ps53FGrD
| 1,157
|
[Linux] - Instructions for exposing Ollama doesn't work
|
{
"login": "SoloBSD",
"id": 17459633,
"node_id": "MDQ6VXNlcjE3NDU5NjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/17459633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoloBSD",
"html_url": "https://github.com/SoloBSD",
"followers_url": "https://api.github.com/users/SoloBSD/followers",
"following_url": "https://api.github.com/users/SoloBSD/following{/other_user}",
"gists_url": "https://api.github.com/users/SoloBSD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoloBSD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoloBSD/subscriptions",
"organizations_url": "https://api.github.com/users/SoloBSD/orgs",
"repos_url": "https://api.github.com/users/SoloBSD/repos",
"events_url": "https://api.github.com/users/SoloBSD/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoloBSD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 8
| 2023-11-16T21:33:43
| 2023-11-17T01:09:35
| 2023-11-17T00:55:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The Linux instructions for exposing Ollama on the network don't work.
https://github.com/jmorganca/ollama/blob/main/docs/faq.md#how-can-i-expose-ollama-on-my-network
For some reason, when Ollama is installed on Linux it creates:
/etc/systemd/system/ollama.service
So it seems it never processes the
/etc/systemd/system/ollama.service.d/environment.conf file.
I tried to add:
Environment=OLLAMA_HOST=0.0.0.0:11434
under the [Service] section of /etc/systemd/system/ollama.service,
but it still doesn't take effect.
There is already an "Environment" statement there which contains some paths. I didn't try appending the OLLAMA_HOST variable to the end of that line; instead I created a new line below the existing one.
I only made it work by manually running:
export OLLAMA_HOST=0.0.0.0:11434
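A sketch of the systemd drop-in approach, which avoids editing the generated unit file directly (and survives reinstalls):

```shell
# Create a drop-in override for the service
sudo systemctl edit ollama.service
# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"

# Reload units and restart so the override takes effect
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

Note that editing the unit file without `systemctl daemon-reload` is one common reason a new `Environment=` line appears to be ignored.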
|
{
"login": "SoloBSD",
"id": 17459633,
"node_id": "MDQ6VXNlcjE3NDU5NjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/17459633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoloBSD",
"html_url": "https://github.com/SoloBSD",
"followers_url": "https://api.github.com/users/SoloBSD/followers",
"following_url": "https://api.github.com/users/SoloBSD/following{/other_user}",
"gists_url": "https://api.github.com/users/SoloBSD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoloBSD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoloBSD/subscriptions",
"organizations_url": "https://api.github.com/users/SoloBSD/orgs",
"repos_url": "https://api.github.com/users/SoloBSD/repos",
"events_url": "https://api.github.com/users/SoloBSD/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoloBSD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1157/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1157/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2466
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2466/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2466/comments
|
https://api.github.com/repos/ollama/ollama/issues/2466/events
|
https://github.com/ollama/ollama/pull/2466
| 2,130,496,639
|
PR_kwDOJ0Z1Ps5mp7JN
| 2,466
|
Added NextJS web interface for Ollama models to readme.md
|
{
"login": "jakobhoeg",
"id": 114422072,
"node_id": "U_kgDOBtHxOA",
"avatar_url": "https://avatars.githubusercontent.com/u/114422072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jakobhoeg",
"html_url": "https://github.com/jakobhoeg",
"followers_url": "https://api.github.com/users/jakobhoeg/followers",
"following_url": "https://api.github.com/users/jakobhoeg/following{/other_user}",
"gists_url": "https://api.github.com/users/jakobhoeg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jakobhoeg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jakobhoeg/subscriptions",
"organizations_url": "https://api.github.com/users/jakobhoeg/orgs",
"repos_url": "https://api.github.com/users/jakobhoeg/repos",
"events_url": "https://api.github.com/users/jakobhoeg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jakobhoeg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-12T16:26:42
| 2024-02-20T02:57:36
| 2024-02-20T02:57:36
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2466",
"html_url": "https://github.com/ollama/ollama/pull/2466",
"diff_url": "https://github.com/ollama/ollama/pull/2466.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2466.patch",
"merged_at": "2024-02-20T02:57:36"
}
|
Added [nextjs-ollama-llm-ui](https://github.com/jakobhoeg/nextjs-ollama-llm-ui) to the readme file.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2466/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2466/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2659
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2659/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2659/comments
|
https://api.github.com/repos/ollama/ollama/issues/2659/events
|
https://github.com/ollama/ollama/issues/2659
| 2,148,066,317
|
I_kwDOJ0Z1Ps6ACOQN
| 2,659
|
Add phixtral
|
{
"login": "vprelovac",
"id": 4319401,
"node_id": "MDQ6VXNlcjQzMTk0MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4319401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vprelovac",
"html_url": "https://github.com/vprelovac",
"followers_url": "https://api.github.com/users/vprelovac/followers",
"following_url": "https://api.github.com/users/vprelovac/following{/other_user}",
"gists_url": "https://api.github.com/users/vprelovac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vprelovac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vprelovac/subscriptions",
"organizations_url": "https://api.github.com/users/vprelovac/orgs",
"repos_url": "https://api.github.com/users/vprelovac/repos",
"events_url": "https://api.github.com/users/vprelovac/events{/privacy}",
"received_events_url": "https://api.github.com/users/vprelovac/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2024-02-22T02:39:04
| 2024-03-12T02:02:47
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently the best 2B model
https://huggingface.co/shadowml/phixtral-4x2_8odd
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2659/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2659/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2850
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2850/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2850/comments
|
https://api.github.com/repos/ollama/ollama/issues/2850/events
|
https://github.com/ollama/ollama/issues/2850
| 2,162,455,308
|
I_kwDOJ0Z1Ps6A5HMM
| 2,850
|
`ollama push` and `ollama pull` are slow or hang on windows
|
{
"login": "ewebgh33",
"id": 123797054,
"node_id": "U_kgDOB2D-Pg",
"avatar_url": "https://avatars.githubusercontent.com/u/123797054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ewebgh33",
"html_url": "https://github.com/ewebgh33",
"followers_url": "https://api.github.com/users/ewebgh33/followers",
"following_url": "https://api.github.com/users/ewebgh33/following{/other_user}",
"gists_url": "https://api.github.com/users/ewebgh33/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ewebgh33/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ewebgh33/subscriptions",
"organizations_url": "https://api.github.com/users/ewebgh33/orgs",
"repos_url": "https://api.github.com/users/ewebgh33/repos",
"events_url": "https://api.github.com/users/ewebgh33/events{/privacy}",
"received_events_url": "https://api.github.com/users/ewebgh33/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2024-03-01T02:21:35
| 2024-08-06T18:10:32
| 2024-08-06T18:10:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can't download ANY models.
What is happening? It's not my internet; speed tests are fast.
Are your servers OK?
Is the Windows version still buggy? Using the latest, 0.1.27 (Win11).
As per docs, set Windows environment variable to:
OLLAMA_MODELS = D:\AI\text\ollama-models
I am familiar with environment variables and this worked with llama2 a few days ago.
Now in Powershell
`ollama pull phind-codellama`
Says it will take 99hrs and has downloaded 82kb
Then quits DL
`Error: context canceled`
Just tried codellama:70b, same thing. 99hrs, cancels with error.
This is why people ask, "why can't we just use a GGUF" (or AWQ, or whatever). Multiple sources host the models, but we need these hashed files and blobs that only Ollama has. Centralised models are a single point of failure; case in point, this ticket.
Some models I already have, since I run them in Oobabooga. Is it too much to ask that I could reuse these already downloaded (and large) models, rather than keeping two copies of the same thing in different formats?
Rebooted - no change. Can't download ANY models.
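For what it's worth, Ollama's documented import flow can register a local GGUF through a Modelfile without any network download, which may work around the failing pull. A sketch (the file path and model name below are placeholders, not files from this report):

```
# Modelfile pointing at a GGUF already on disk
FROM D:\AI\text\models\phind-codellama.Q4_K_M.gguf
```

Then `ollama create phind-local -f Modelfile` registers the model locally, and `ollama run phind-local` uses it, with no pull from the registry.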
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2850/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5474
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5474/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5474/comments
|
https://api.github.com/repos/ollama/ollama/issues/5474/events
|
https://github.com/ollama/ollama/issues/5474
| 2,389,647,879
|
I_kwDOJ0Z1Ps6ObyIH
| 5,474
|
InternLM2.5 - hallucinations - lot of repetitions etc
|
{
"login": "Qualzz",
"id": 35169816,
"node_id": "MDQ6VXNlcjM1MTY5ODE2",
"avatar_url": "https://avatars.githubusercontent.com/u/35169816?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Qualzz",
"html_url": "https://github.com/Qualzz",
"followers_url": "https://api.github.com/users/Qualzz/followers",
"following_url": "https://api.github.com/users/Qualzz/following{/other_user}",
"gists_url": "https://api.github.com/users/Qualzz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Qualzz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qualzz/subscriptions",
"organizations_url": "https://api.github.com/users/Qualzz/orgs",
"repos_url": "https://api.github.com/users/Qualzz/repos",
"events_url": "https://api.github.com/users/Qualzz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Qualzz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-07-03T23:18:06
| 2024-10-04T17:07:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Seems like something is wrong with InternLM2.5; I can't get anything meaningful out of it (tried with a 32k context).
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
v0.1.48
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5474/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/5474/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1169
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1169/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1169/comments
|
https://api.github.com/repos/ollama/ollama/issues/1169/events
|
https://github.com/ollama/ollama/issues/1169
| 1,998,422,439
|
I_kwDOJ0Z1Ps53HYGn
| 1,169
|
Update the model name in the api doc
|
{
"login": "shenli",
"id": 1192573,
"node_id": "MDQ6VXNlcjExOTI1NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1192573?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shenli",
"html_url": "https://github.com/shenli",
"followers_url": "https://api.github.com/users/shenli/followers",
"following_url": "https://api.github.com/users/shenli/following{/other_user}",
"gists_url": "https://api.github.com/users/shenli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shenli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shenli/subscriptions",
"organizations_url": "https://api.github.com/users/shenli/orgs",
"repos_url": "https://api.github.com/users/shenli/repos",
"events_url": "https://api.github.com/users/shenli/events{/privacy}",
"received_events_url": "https://api.github.com/users/shenli/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-11-17T07:10:44
| 2023-11-17T12:18:09
| 2023-11-17T12:18:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, I am new to Ollama.
I followed the [Quickstart](https://github.com/jmorganca/ollama/tree/main#quickstart) to try Ollama with model Llama2. It is very easy to run and a very interesting project.
When I explored further in the [API doc](https://github.com/jmorganca/ollama/blob/main/docs/api.md), I found that the model names are not consistent: both `llama2` and `llama2:7b` appear. I hit errors like `{"error":"model 'llama2:7b' not found, try pulling it first"}`. From the [model library section](https://github.com/jmorganca/ollama/tree/main#model-library), I assumed `llama2` is the same as `llama2:7b`, but from [here](https://ollama.ai/library/llama2) I see that `llama2` is actually `llama2:latest`.
For the [API doc](https://github.com/jmorganca/ollama/blob/main/docs/api.md), do you think it is better to change all the `llama2:7b` to `llama2`? It is good for the people who are following the Quickstart.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1169/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2517
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2517/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2517/comments
|
https://api.github.com/repos/ollama/ollama/issues/2517/events
|
https://github.com/ollama/ollama/issues/2517
| 2,137,087,723
|
I_kwDOJ0Z1Ps5_YV7r
| 2,517
|
parser/parser.go:9:2: package log/slog is not in GOROOT (/usr/local/go120/src/log/slog)
|
{
"login": "yurivict",
"id": 271906,
"node_id": "MDQ6VXNlcjI3MTkwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yurivict",
"html_url": "https://github.com/yurivict",
"followers_url": "https://api.github.com/users/yurivict/followers",
"following_url": "https://api.github.com/users/yurivict/following{/other_user}",
"gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yurivict/subscriptions",
"organizations_url": "https://api.github.com/users/yurivict/orgs",
"repos_url": "https://api.github.com/users/yurivict/repos",
"events_url": "https://api.github.com/users/yurivict/events{/privacy}",
"received_events_url": "https://api.github.com/users/yurivict/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-15T17:36:41
| 2024-02-15T19:51:08
| 2024-02-15T19:51:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Build fails:
```
===> Building for ollama-0.1.25
(cd /usr/ports/misc/ollama/work/github.com/ollama/ollama@v0.1.25; for t in ./cmd; do out=$(/usr/bin/basename $(echo ${t} | /usr/bin/sed -Ee 's/^[^:]*:([^:]+).*$/\1/' -e 's/^\.$/ollama/')); pkg=$(echo ${t} | /usr/bin/sed -Ee 's/^([^:]*).*$/\1/' -e 's/^ollama$/./'); echo "===> Building ${out} from ${pkg}"; /usr/bin/env XDG_DATA_HOME=/usr/ports/misc/ollama/work XDG_CONFIG_HOME=/usr/ports/misc/ollama/work XDG_CACHE_HOME=/usr/ports/misc/ollama/work/.cache HOME=/usr/ports/misc/ollama/work PATH=/usr/local/libexec/ccache:/usr/ports/misc/ollama/work/.bin:/home/yuri/.cargo/bin:/home/yuri/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin PKG_CONFIG_LIBDIR=/usr/ports/misc/ollama/work/.pkgconfig:/usr/local/libdata/pkgconfig:/usr/local/share/pkgconfig:/usr/libdata/pkgconfig MK_DEBUG_FILES=no MK_KERNEL_SYMBOLS=no SHELL=/bin/sh NO_LINT=YES PREFIX=/usr/local LOCALBASE=/usr/local CC="cc" CFLAGS="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing " CPP="cpp" CPPFLAGS="" LDFLAGS=" -fstack-protector-strong " LIBS="" CXX="c++" CXXFLAGS="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing " CCACHE_DIR="/tmp/.ccache" BSD_INSTALL_PROGRAM="install -s -m 555" BSD_INSTALL_LIB="install -s -m 0644" BSD_INSTALL_SCRIPT="install -m 555" BSD_INSTALL_DATA="install -m 0644" BSD_INSTALL_MAN="install -m 444" CGO_ENABLED=1 CGO_CFLAGS="-I/usr/local/include" CGO_LDFLAGS="-L/usr/local/lib" GOAMD64= GOARM= GOTMPDIR="/usr/ports/misc/ollama/work" GOPATH="/usr/ports/distfiles/go/misc_ollama" GOBIN="/usr/ports/misc/ollama/work/bin" GO111MODULE=on GOFLAGS=-modcacherw GOSUMDB=sum.golang.org GOMAXPROCS=7 GOPROXY=off /usr/local/bin/go120 build -buildmode=exe -v -trimpath -ldflags=-s -buildvcs=false -mod=vendor -o /usr/ports/misc/ollama/work/bin/${out} ${pkg}; done)
===> Building cmd from ./cmd
package github.com/jmorganca/ollama/cmd
imports github.com/jmorganca/ollama/server
imports github.com/jmorganca/ollama/gpu: C source files not allowed when not using cgo or SWIG: gpu_info_cpu.c gpu_info_cuda.c gpu_info_rocm.c
parser/parser.go:9:2: package log/slog is not in GOROOT (/usr/local/go120/src/log/slog)
note: imported by a module that requires go 1.21
parser/parser.go:10:2: package slices is not in GOROOT (/usr/local/go120/src/slices)
note: imported by a module that requires go 1.21
*** Error code 1
```
Aren't all Go dependencies supposed to be fetched from the Go module servers? Virtually all other Go projects require nothing beyond the modules downloaded from there.
I'm building inside the FreeBSD ports framework in an attempt to create a port.
Version: 0.1.25
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2517/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7287
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7287/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7287/comments
|
https://api.github.com/repos/ollama/ollama/issues/7287/events
|
https://github.com/ollama/ollama/issues/7287
| 2,601,723,323
|
I_kwDOJ0Z1Ps6bEyW7
| 7,287
|
Version v0.3.14 impacted CPU inference performance
|
{
"login": "closesim",
"id": 9018799,
"node_id": "MDQ6VXNlcjkwMTg3OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9018799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/closesim",
"html_url": "https://github.com/closesim",
"followers_url": "https://api.github.com/users/closesim/followers",
"following_url": "https://api.github.com/users/closesim/following{/other_user}",
"gists_url": "https://api.github.com/users/closesim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/closesim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/closesim/subscriptions",
"organizations_url": "https://api.github.com/users/closesim/orgs",
"repos_url": "https://api.github.com/users/closesim/repos",
"events_url": "https://api.github.com/users/closesim/events{/privacy}",
"received_events_url": "https://api.github.com/users/closesim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 9
| 2024-10-21T08:10:06
| 2024-10-30T22:05:47
| 2024-10-30T22:05:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi, I just updated the docker container where I run my small models to the latest version, as I do every 15 days or so. I'm using a quad-core CPU (no GPU), and with this new version LLama 3.1 8b became very slow. I initially suspected a hardware issue, like overheating, but htop showed Ollama using 2 threads out of 8 (two fewer than normal), which means 2 cores out of 4. After manually setting the number of threads for the model, performance returned to what it was before.
I see in the changelog that the thread behavior has changed, so I don't know if this is intended or a bug. Is there an environment variable for setting this in the meantime, instead of telling every model to use 4 threads manually?
For context:
- Main machine is Windows
- The Linux OS (Ubuntu) with docker runs on Hyper-V with 8 "CPUs" allocated
- I use Open WebUI to interact with the models
- Ollama used to use 4 threads.

### OS
Docker
### GPU
_No response_
### CPU
AMD
### Ollama version
v0.3.14
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7287/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1088
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1088/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1088/comments
|
https://api.github.com/repos/ollama/ollama/issues/1088/events
|
https://github.com/ollama/ollama/issues/1088
| 1,989,023,621
|
I_kwDOJ0Z1Ps52jheF
| 1,088
|
Problems installing the docker image.
|
{
"login": "pdavis68",
"id": 2781885,
"node_id": "MDQ6VXNlcjI3ODE4ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2781885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdavis68",
"html_url": "https://github.com/pdavis68",
"followers_url": "https://api.github.com/users/pdavis68/followers",
"following_url": "https://api.github.com/users/pdavis68/following{/other_user}",
"gists_url": "https://api.github.com/users/pdavis68/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdavis68/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdavis68/subscriptions",
"organizations_url": "https://api.github.com/users/pdavis68/orgs",
"repos_url": "https://api.github.com/users/pdavis68/repos",
"events_url": "https://api.github.com/users/pdavis68/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdavis68/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-11T16:17:04
| 2023-11-11T16:19:32
| 2023-11-11T16:19:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "pdavis68",
"id": 2781885,
"node_id": "MDQ6VXNlcjI3ODE4ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2781885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdavis68",
"html_url": "https://github.com/pdavis68",
"followers_url": "https://api.github.com/users/pdavis68/followers",
"following_url": "https://api.github.com/users/pdavis68/following{/other_user}",
"gists_url": "https://api.github.com/users/pdavis68/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdavis68/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdavis68/subscriptions",
"organizations_url": "https://api.github.com/users/pdavis68/orgs",
"repos_url": "https://api.github.com/users/pdavis68/repos",
"events_url": "https://api.github.com/users/pdavis68/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdavis68/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1088/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3262
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3262/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3262/comments
|
https://api.github.com/repos/ollama/ollama/issues/3262/events
|
https://github.com/ollama/ollama/issues/3262
| 2,196,585,763
|
I_kwDOJ0Z1Ps6C7T0j
| 3,262
|
Ollama can support windows 7?
|
{
"login": "zhaosd",
"id": 5444416,
"node_id": "MDQ6VXNlcjU0NDQ0MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5444416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaosd",
"html_url": "https://github.com/zhaosd",
"followers_url": "https://api.github.com/users/zhaosd/followers",
"following_url": "https://api.github.com/users/zhaosd/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaosd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaosd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaosd/subscriptions",
"organizations_url": "https://api.github.com/users/zhaosd/orgs",
"repos_url": "https://api.github.com/users/zhaosd/repos",
"events_url": "https://api.github.com/users/zhaosd/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaosd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2024-03-20T03:12:53
| 2024-11-18T23:47:12
| 2024-03-20T07:41:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3262/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7124
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7124/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7124/comments
|
https://api.github.com/repos/ollama/ollama/issues/7124/events
|
https://github.com/ollama/ollama/pull/7124
| 2,571,787,113
|
PR_kwDOJ0Z1Ps594dhh
| 7,124
|
llama: Decouple patching script from submodule
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-08T00:29:54
| 2024-10-08T16:21:35
| 2024-10-08T15:54:00
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7124",
"html_url": "https://github.com/ollama/ollama/pull/7124",
"diff_url": "https://github.com/ollama/ollama/pull/7124.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7124.patch",
"merged_at": null
}
|
Replaced by #7139 on main
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7124/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/862
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/862/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/862/comments
|
https://api.github.com/repos/ollama/ollama/issues/862/events
|
https://github.com/ollama/ollama/pull/862
| 1,955,129,810
|
PR_kwDOJ0Z1Ps5dbU1e
| 862
|
fix/Predict: A prediction should use the options sent with the request
|
{
"login": "CyrilPeponnet",
"id": 2277387,
"node_id": "MDQ6VXNlcjIyNzczODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2277387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CyrilPeponnet",
"html_url": "https://github.com/CyrilPeponnet",
"followers_url": "https://api.github.com/users/CyrilPeponnet/followers",
"following_url": "https://api.github.com/users/CyrilPeponnet/following{/other_user}",
"gists_url": "https://api.github.com/users/CyrilPeponnet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CyrilPeponnet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CyrilPeponnet/subscriptions",
"organizations_url": "https://api.github.com/users/CyrilPeponnet/orgs",
"repos_url": "https://api.github.com/users/CyrilPeponnet/repos",
"events_url": "https://api.github.com/users/CyrilPeponnet/events{/privacy}",
"received_events_url": "https://api.github.com/users/CyrilPeponnet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-10-20T23:26:14
| 2023-10-26T15:07:42
| 2023-10-26T15:07:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/862",
"html_url": "https://github.com/ollama/ollama/pull/862",
"diff_url": "https://github.com/ollama/ollama/pull/862.diff",
"patch_url": "https://github.com/ollama/ollama/pull/862.patch",
"merged_at": null
}
|
Consecutive queries to the same running model should use the client request parameters instead of the ones set during model loading.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/862/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6799
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6799/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6799/comments
|
https://api.github.com/repos/ollama/ollama/issues/6799/events
|
https://github.com/ollama/ollama/issues/6799
| 2,526,152,209
|
I_kwDOJ0Z1Ps6WkgYR
| 6,799
|
Is it possible to configure ollama deployed in docker?
|
{
"login": "wizounovziki",
"id": 42036658,
"node_id": "MDQ6VXNlcjQyMDM2NjU4",
"avatar_url": "https://avatars.githubusercontent.com/u/42036658?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wizounovziki",
"html_url": "https://github.com/wizounovziki",
"followers_url": "https://api.github.com/users/wizounovziki/followers",
"following_url": "https://api.github.com/users/wizounovziki/following{/other_user}",
"gists_url": "https://api.github.com/users/wizounovziki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wizounovziki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wizounovziki/subscriptions",
"organizations_url": "https://api.github.com/users/wizounovziki/orgs",
"repos_url": "https://api.github.com/users/wizounovziki/repos",
"events_url": "https://api.github.com/users/wizounovziki/events{/privacy}",
"received_events_url": "https://api.github.com/users/wizounovziki/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-14T09:29:47
| 2024-09-25T21:23:15
| 2024-09-25T21:23:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I pulled the Docker image from Docker Hub, launched a few models, and then found that the number of concurrent user requests was limited.
The documentation shows this can be solved by setting OLLAMA_NUM_PARALLEL via systemctl commands.
How can I do this, since systemctl is not included in the Docker container?
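For containers, environment variables are normally passed at `docker run` time rather than via systemd. A minimal sketch (the container name, port mapping, and parallelism value of 4 are illustrative assumptions, not from the original question):

```shell
# Pass OLLAMA_NUM_PARALLEL as an environment variable when starting the
# container; no systemctl is needed inside Docker.
docker run -d \
  -e OLLAMA_NUM_PARALLEL=4 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
```

The same `-e` flag works for other Ollama server settings (e.g. `OLLAMA_MAX_LOADED_MODELS`).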
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6799/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1499
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1499/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1499/comments
|
https://api.github.com/repos/ollama/ollama/issues/1499/events
|
https://github.com/ollama/ollama/issues/1499
| 2,039,445,848
|
I_kwDOJ0Z1Ps55j3lY
| 1,499
|
Add mistral's new 7B-instruct-v0.2
|
{
"login": "tarek-ayed",
"id": 45576986,
"node_id": "MDQ6VXNlcjQ1NTc2OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/45576986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tarek-ayed",
"html_url": "https://github.com/tarek-ayed",
"followers_url": "https://api.github.com/users/tarek-ayed/followers",
"following_url": "https://api.github.com/users/tarek-ayed/following{/other_user}",
"gists_url": "https://api.github.com/users/tarek-ayed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tarek-ayed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tarek-ayed/subscriptions",
"organizations_url": "https://api.github.com/users/tarek-ayed/orgs",
"repos_url": "https://api.github.com/users/tarek-ayed/repos",
"events_url": "https://api.github.com/users/tarek-ayed/events{/privacy}",
"received_events_url": "https://api.github.com/users/tarek-ayed/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-12-13T10:51:12
| 2023-12-14T03:11:15
| 2023-12-13T16:17:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Along with many releases, Mistral vastly improved their existing 7B model with a version named `v0.2`.
It has 32k context instead of 8k and better benchmark scores: https://x.com/dchaplot/status/1734198245067243629?s=20
More can be found here: https://docs.mistral.ai/platform/endpoints (see "Mistral Tiny")
The weights are published on HuggingFace:
https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
I don't know if anything needs to be implemented on the llama.cpp front, given that it's the same architecture as before.
Let me know how I can contribute to make this happen ;)
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1499/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5399
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5399/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5399/comments
|
https://api.github.com/repos/ollama/ollama/issues/5399/events
|
https://github.com/ollama/ollama/issues/5399
| 2,382,949,738
|
I_kwDOJ0Z1Ps6OCO1q
| 5,399
|
Please support models of rerank type
|
{
"login": "yushengliao",
"id": 29765903,
"node_id": "MDQ6VXNlcjI5NzY1OTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/29765903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yushengliao",
"html_url": "https://github.com/yushengliao",
"followers_url": "https://api.github.com/users/yushengliao/followers",
"following_url": "https://api.github.com/users/yushengliao/following{/other_user}",
"gists_url": "https://api.github.com/users/yushengliao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yushengliao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yushengliao/subscriptions",
"organizations_url": "https://api.github.com/users/yushengliao/orgs",
"repos_url": "https://api.github.com/users/yushengliao/repos",
"events_url": "https://api.github.com/users/yushengliao/events{/privacy}",
"received_events_url": "https://api.github.com/users/yushengliao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 10
| 2024-07-01T06:23:13
| 2024-09-02T20:51:51
| 2024-09-02T20:51:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The Ollama project has so many users; why has it taken so long to support reranking?
Similar software such as LocalAI and Xinference already supports rerank models:
https://localai.io/features/reranker/
https://inference.readthedocs.io/en/latest/models/builtin/rerank/index.html

|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5399/reactions",
"total_count": 10,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5399/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6952
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6952/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6952/comments
|
https://api.github.com/repos/ollama/ollama/issues/6952/events
|
https://github.com/ollama/ollama/issues/6952
| 2,547,557,710
|
I_kwDOJ0Z1Ps6X2KVO
| 6,952
|
codegeex4-----Error: pull model manifest
|
{
"login": "zylGit-lte",
"id": 181957291,
"node_id": "U_kgDOCthyqw",
"avatar_url": "https://avatars.githubusercontent.com/u/181957291?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zylGit-lte",
"html_url": "https://github.com/zylGit-lte",
"followers_url": "https://api.github.com/users/zylGit-lte/followers",
"following_url": "https://api.github.com/users/zylGit-lte/following{/other_user}",
"gists_url": "https://api.github.com/users/zylGit-lte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zylGit-lte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zylGit-lte/subscriptions",
"organizations_url": "https://api.github.com/users/zylGit-lte/orgs",
"repos_url": "https://api.github.com/users/zylGit-lte/repos",
"events_url": "https://api.github.com/users/zylGit-lte/events{/privacy}",
"received_events_url": "https://api.github.com/users/zylGit-lte/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-25T10:09:54
| 2024-09-25T11:22:21
| 2024-09-25T11:21:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I run the command `ollama run codegeex4`, it prints the log below. How can I solve this problem?
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/codegeex4/manifests/latest": dial tcp 172.67.182.229:443: i/o timeout
### OS
Linux
### GPU
Other
### CPU
Intel
### Ollama version
0.3.11
|
{
"login": "zylGit-lte",
"id": 181957291,
"node_id": "U_kgDOCthyqw",
"avatar_url": "https://avatars.githubusercontent.com/u/181957291?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zylGit-lte",
"html_url": "https://github.com/zylGit-lte",
"followers_url": "https://api.github.com/users/zylGit-lte/followers",
"following_url": "https://api.github.com/users/zylGit-lte/following{/other_user}",
"gists_url": "https://api.github.com/users/zylGit-lte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zylGit-lte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zylGit-lte/subscriptions",
"organizations_url": "https://api.github.com/users/zylGit-lte/orgs",
"repos_url": "https://api.github.com/users/zylGit-lte/repos",
"events_url": "https://api.github.com/users/zylGit-lte/events{/privacy}",
"received_events_url": "https://api.github.com/users/zylGit-lte/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6952/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1709
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1709/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1709/comments
|
https://api.github.com/repos/ollama/ollama/issues/1709/events
|
https://github.com/ollama/ollama/issues/1709
| 2,055,700,932
|
I_kwDOJ0Z1Ps56h4HE
| 1,709
|
Is there any plan to support FinGPT
|
{
"login": "waqasakram117",
"id": 13805372,
"node_id": "MDQ6VXNlcjEzODA1Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/13805372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/waqasakram117",
"html_url": "https://github.com/waqasakram117",
"followers_url": "https://api.github.com/users/waqasakram117/followers",
"following_url": "https://api.github.com/users/waqasakram117/following{/other_user}",
"gists_url": "https://api.github.com/users/waqasakram117/gists{/gist_id}",
"starred_url": "https://api.github.com/users/waqasakram117/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/waqasakram117/subscriptions",
"organizations_url": "https://api.github.com/users/waqasakram117/orgs",
"repos_url": "https://api.github.com/users/waqasakram117/repos",
"events_url": "https://api.github.com/users/waqasakram117/events{/privacy}",
"received_events_url": "https://api.github.com/users/waqasakram117/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 5
| 2023-12-25T13:55:01
| 2024-07-03T21:18:02
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey Team, Is there any plan to support [FinGPT](https://github.com/AI4Finance-Foundation/FinGPT) anytime soon?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1709/reactions",
"total_count": 22,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 12
}
|
https://api.github.com/repos/ollama/ollama/issues/1709/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6481
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6481/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6481/comments
|
https://api.github.com/repos/ollama/ollama/issues/6481/events
|
https://github.com/ollama/ollama/issues/6481
| 2,483,787,305
|
I_kwDOJ0Z1Ps6UC5Yp
| 6,481
|
gork2
|
{
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/followers",
"following_url": "https://api.github.com/users/olumolu/following{/other_user}",
"gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olumolu/subscriptions",
"organizations_url": "https://api.github.com/users/olumolu/orgs",
"repos_url": "https://api.github.com/users/olumolu/repos",
"events_url": "https://api.github.com/users/olumolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/olumolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-23T19:46:43
| 2024-08-24T04:31:25
| 2024-08-23T20:35:17
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The new Grok-2 has already been published; can we have support for it? Thanks.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6481/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3844
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3844/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3844/comments
|
https://api.github.com/repos/ollama/ollama/issues/3844/events
|
https://github.com/ollama/ollama/issues/3844
| 2,258,843,905
|
I_kwDOJ0Z1Ps6GozkB
| 3,844
|
api error occurred after some times request
|
{
"login": "Shiyaoa",
"id": 48488459,
"node_id": "MDQ6VXNlcjQ4NDg4NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/48488459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shiyaoa",
"html_url": "https://github.com/Shiyaoa",
"followers_url": "https://api.github.com/users/Shiyaoa/followers",
"following_url": "https://api.github.com/users/Shiyaoa/following{/other_user}",
"gists_url": "https://api.github.com/users/Shiyaoa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shiyaoa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shiyaoa/subscriptions",
"organizations_url": "https://api.github.com/users/Shiyaoa/orgs",
"repos_url": "https://api.github.com/users/Shiyaoa/repos",
"events_url": "https://api.github.com/users/Shiyaoa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shiyaoa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-04-23T13:09:20
| 2025-01-06T03:55:27
| 2024-04-25T11:50:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I try to POST requests using the URL http://localhost:11434/v1 and the model "llama3:8b-instruct-q8_0". It works successfully the first time, but then fails with this error:
Error occurred: Error code: 400 - {'error': {'message': 'unexpected server status: 1', 'type': 'api_error', 'param': None, 'code': None}}
Then I used the model "wizardlm2:7b-q8_0", and the same error occurred after 2418 requests.
28%|██▊ | 2418/8569 [4:53:46<12:27:18, 7.29s/it]
Error occurred: Error code: 400 - {'error': {'message': 'unexpected server status: 1', 'type': 'api_error', 'param': None, 'code': None}}
I have checked the logs, but I can't solve it.
[GIN] 2024/04/23 - 04:33:07 | 400 | 56.4274ms | 127.0.0.1 | POST "/v1/chat/completions"
time=2024-04-23T04:33:07.535+08:00 level=ERROR source=prompt.go:86 msg="failed to encode prompt" err="unexpected server status: 1"
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "Shiyaoa",
"id": 48488459,
"node_id": "MDQ6VXNlcjQ4NDg4NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/48488459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shiyaoa",
"html_url": "https://github.com/Shiyaoa",
"followers_url": "https://api.github.com/users/Shiyaoa/followers",
"following_url": "https://api.github.com/users/Shiyaoa/following{/other_user}",
"gists_url": "https://api.github.com/users/Shiyaoa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shiyaoa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shiyaoa/subscriptions",
"organizations_url": "https://api.github.com/users/Shiyaoa/orgs",
"repos_url": "https://api.github.com/users/Shiyaoa/repos",
"events_url": "https://api.github.com/users/Shiyaoa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shiyaoa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3844/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3844/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4275
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4275/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4275/comments
|
https://api.github.com/repos/ollama/ollama/issues/4275/events
|
https://github.com/ollama/ollama/issues/4275
| 2,286,923,121
|
I_kwDOJ0Z1Ps6IT61x
| 4,275
|
Degraded accuracy when using the nomic-embed-text (v1.5) model with Ollama versions 0.1.32 and 0.1.33
|
{
"login": "Ganesh1030",
"id": 48667223,
"node_id": "MDQ6VXNlcjQ4NjY3MjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/48667223?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ganesh1030",
"html_url": "https://github.com/Ganesh1030",
"followers_url": "https://api.github.com/users/Ganesh1030/followers",
"following_url": "https://api.github.com/users/Ganesh1030/following{/other_user}",
"gists_url": "https://api.github.com/users/Ganesh1030/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ganesh1030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ganesh1030/subscriptions",
"organizations_url": "https://api.github.com/users/Ganesh1030/orgs",
"repos_url": "https://api.github.com/users/Ganesh1030/repos",
"events_url": "https://api.github.com/users/Ganesh1030/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ganesh1030/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-09T05:29:55
| 2024-06-26T05:47:57
| 2024-06-25T16:46:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
We have an application where we are training on a CSV file, using the following:
- the nomic-embed-text (v1.5) model
- chromadb
- ollama (0.1.31)
At runtime, we are using 'similarity_search' and getting good accuracy with Ollama version 0.1.31, but when we upgrade Ollama to version 0.1.32 or 0.1.33, accuracy is degraded.
Can anyone please confirm whether this is a known issue or bug, or whether we are missing something?
### OS
macOS
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.1.32 and 0.1.33
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4275/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/633
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/633/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/633/comments
|
https://api.github.com/repos/ollama/ollama/issues/633/events
|
https://github.com/ollama/ollama/pull/633
| 1,917,657,126
|
PR_kwDOJ0Z1Ps5bc6m4
| 633
|
do not download updates multiple times
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-28T14:20:30
| 2023-09-28T19:29:18
| 2023-09-28T19:29:18
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/633",
"html_url": "https://github.com/ollama/ollama/pull/633",
"diff_url": "https://github.com/ollama/ollama/pull/633.diff",
"patch_url": "https://github.com/ollama/ollama/pull/633.patch",
"merged_at": "2023-09-28T19:29:18"
}
|
We've hit a bug in the Electron auto-updater that prevents the toolbar app from restarting after an update when `autoUpdater.checkForUpdates()` is called more than once. The root cause is not clear; it may be related to [this Electron issue](https://github.com/electron-userland/electron-builder/issues/7800). In any case we shouldn't be downloading the update multiple times, so stop checking for updates once we know one is available.
Also log Electron app errors to our server.log file so we can actually diagnose these issues in the wild.
resolves #587
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/633/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4395
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4395/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4395/comments
|
https://api.github.com/repos/ollama/ollama/issues/4395/events
|
https://github.com/ollama/ollama/issues/4395
| 2,292,283,708
|
I_kwDOJ0Z1Ps6IoXk8
| 4,395
|
Cannot Use GPU properly
|
{
"login": "applepieiris",
"id": 36785462,
"node_id": "MDQ6VXNlcjM2Nzg1NDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/36785462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/applepieiris",
"html_url": "https://github.com/applepieiris",
"followers_url": "https://api.github.com/users/applepieiris/followers",
"following_url": "https://api.github.com/users/applepieiris/following{/other_user}",
"gists_url": "https://api.github.com/users/applepieiris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/applepieiris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/applepieiris/subscriptions",
"organizations_url": "https://api.github.com/users/applepieiris/orgs",
"repos_url": "https://api.github.com/users/applepieiris/repos",
"events_url": "https://api.github.com/users/applepieiris/events{/privacy}",
"received_events_url": "https://api.github.com/users/applepieiris/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-05-13T09:25:56
| 2024-06-02T00:29:42
| 2024-06-02T00:29:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I installed Ollama on my Linux server following the official documentation:
`curl -fsSL https://ollama.com/install.sh | sh`
The installation succeeds and returns:
```
>>> Downloading ollama...
######################################################################## 100.0%-#O#- # #
>>> Installing ollama to /usr/local/bin...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
>>> NVIDIA GPU installed.
```
But when I run `ollama run llama2` after the model file has already downloaded, the GPU shows no running process:
```
ubuntu@:~$ sudo nvidia-smi
Mon May 13 09:15:28 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15 Driver Version: 550.54.15 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA A100 80GB PCIe Off | 00000000:03:00.0 Off | On |
| N/A 29C P0 41W / 300W | 0MiB / 81920MiB | N/A Default |
| | | Enabled |
+-----------------------------------------+------------------------+----------------------+
```
But when I check CPU usage:
```
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1701363 ollama 20 0 20.0g 19.1g 18.1g R 840.0 10.1 9:51.51 /tmp/ollama872259507/runners/cpu_avx2/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-949974ebf5978d3d2e232+
1554 root 20 0 1236380 10880 8320 S 6.7 0.0 3:48.73 /usr/bin/containerd-shim-runc-v2 -namespace moby -id d2abaf7e2a6553dc1eae353c2e5eda9138ee8b2b925d1fdaae2ab97518a6996a -address /run/c+
1704361 ubuntu 20 0 11080 4736 3712 R 6.7 0.0 0:00.01 top -bn 1 -i -c
```
From the above, we can see that ollama is running on the CPU.
I checked the ollama logs, which show:

Is there any solution to this?
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.37
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4395/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8654
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8654/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8654/comments
|
https://api.github.com/repos/ollama/ollama/issues/8654/events
|
https://github.com/ollama/ollama/issues/8654
| 2,817,986,286
|
I_kwDOJ0Z1Ps6n9w7u
| 8,654
|
Available memory check should be disabled when mmap is in use
|
{
"login": "outis151",
"id": 11805613,
"node_id": "MDQ6VXNlcjExODA1NjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/11805613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/outis151",
"html_url": "https://github.com/outis151",
"followers_url": "https://api.github.com/users/outis151/followers",
"following_url": "https://api.github.com/users/outis151/following{/other_user}",
"gists_url": "https://api.github.com/users/outis151/gists{/gist_id}",
"starred_url": "https://api.github.com/users/outis151/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/outis151/subscriptions",
"organizations_url": "https://api.github.com/users/outis151/orgs",
"repos_url": "https://api.github.com/users/outis151/repos",
"events_url": "https://api.github.com/users/outis151/events{/privacy}",
"received_events_url": "https://api.github.com/users/outis151/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-29T11:48:38
| 2025-01-29T13:07:03
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
With mmap enabled, a model does not need to fit entirely in system RAM, so the available-memory check should be disabled in that case.
### OS
Linux
### GPU
_No response_
### CPU
Intel
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8654/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8654/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3396
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3396/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3396/comments
|
https://api.github.com/repos/ollama/ollama/issues/3396/events
|
https://github.com/ollama/ollama/issues/3396
| 2,214,189,195
|
I_kwDOJ0Z1Ps6D-diL
| 3,396
|
exec format error when Running Ollama Container on AMD64 Architecture
|
{
"login": "joshyorko",
"id": 54248591,
"node_id": "MDQ6VXNlcjU0MjQ4NTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/54248591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyorko",
"html_url": "https://github.com/joshyorko",
"followers_url": "https://api.github.com/users/joshyorko/followers",
"following_url": "https://api.github.com/users/joshyorko/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyorko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyorko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyorko/subscriptions",
"organizations_url": "https://api.github.com/users/joshyorko/orgs",
"repos_url": "https://api.github.com/users/joshyorko/repos",
"events_url": "https://api.github.com/users/joshyorko/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyorko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-03-28T21:26:11
| 2024-03-29T00:11:02
| 2024-03-29T00:11:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When attempting to run the Ollama container, I continuously encounter an `exec format error`. The container fails to start properly and keeps trying to restart, logging the same error multiple times.
### What did you expect to see?
Expected Behavior: A successful initiation of the Ollama container, ready to receive and execute commands.
### Steps to reproduce
1. **Environment Setup:** Ensure Docker is installed on a system running Ubuntu 22.04.3 LTS with AMD64 architecture.
2. **Attempt with Latest Image:**
- Run the Ollama Docker container using the `latest` tag: `docker run ollama/ollama:latest`.
- Observe the `exec format error` in the Docker logs, indicating the container is failing to start properly and continuously restarting.
3. **Successful Attempt with Previous Release:**
- Switch to using the Ollama Docker container with the `0.1.30` tag: `docker run ollama/ollama:0.1.30`.
- Notice that the container starts successfully without any architecture compatibility issues.
### Are there any recent changes that introduced the issue?
The issue arose after an automated update on 3/28, handled by Watchtower, which pulled the latest release of the Ollama container. Prior to this update, the container was running smoothly. The latest image appears to be missing a build for the linux/amd64 architecture, which is necessary for compatibility with my system, so running it leads to an `exec format error` and causes the container to restart continuously.
### OS
Linux
### Architecture
arm64
### Platform
Docker
### Ollama version
0.1.30
### GPU
_No response_
### GPU info
na
### CPU
_No response_
### Other software
na
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3396/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7553
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7553/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7553/comments
|
https://api.github.com/repos/ollama/ollama/issues/7553/events
|
https://github.com/ollama/ollama/issues/7553
| 2,640,556,716
|
I_kwDOJ0Z1Ps6dY7Ks
| 7,553
|
Unable to load images from network fileshares on Windows
|
{
"login": "Antsiscool",
"id": 4112838,
"node_id": "MDQ6VXNlcjQxMTI4Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4112838?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Antsiscool",
"html_url": "https://github.com/Antsiscool",
"followers_url": "https://api.github.com/users/Antsiscool/followers",
"following_url": "https://api.github.com/users/Antsiscool/following{/other_user}",
"gists_url": "https://api.github.com/users/Antsiscool/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Antsiscool/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Antsiscool/subscriptions",
"organizations_url": "https://api.github.com/users/Antsiscool/orgs",
"repos_url": "https://api.github.com/users/Antsiscool/repos",
"events_url": "https://api.github.com/users/Antsiscool/events{/privacy}",
"received_events_url": "https://api.github.com/users/Antsiscool/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-11-07T10:24:54
| 2024-11-17T19:50:17
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Using Ollama on Windows via the terminal, if you ask a question that references an image on a network fileshare, it responds that it is not able to see the photo. If you copy the image locally and then reference the local copy, it has no problem analysing the image.
Paths starting with \\ will not load the image.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4.0
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7553/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8002
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8002/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8002/comments
|
https://api.github.com/repos/ollama/ollama/issues/8002/events
|
https://github.com/ollama/ollama/pull/8002
| 2,725,706,888
|
PR_kwDOJ0Z1Ps6Edzde
| 8,002
|
llama: preserve field order in user-defined JSON schemas
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-12-09T01:14:51
| 2024-12-11T22:07:32
| 2024-12-11T22:07:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8002",
"html_url": "https://github.com/ollama/ollama/pull/8002",
"diff_url": "https://github.com/ollama/ollama/pull/8002.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8002.patch",
"merged_at": "2024-12-11T22:07:30"
}
|
llama: preserve field order in user-defined JSON schemas
Previously we decoded and re-encoded JSON schemas during validation,
which served no purpose since json.RawMessage already validates JSON
syntax. Worse, the re-encoding lost field ordering from the original
schema, which affects inference quality during step-by-step reasoning.
While fixing this ordering issue by using json.RawMessage directly,
testing revealed that schema_to_grammar (from llama.cpp) also fails to
preserve field order during grammar generation. This appears to be the
root cause of inference degradation.
This change prevents us from mangling the user's original schema order,
but we still need to address the ordering issue in schema_to_grammar.
That will be a separate change.
Updates #7978
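For illustration only (not the actual ollama code, and `validateSchema` is a hypothetical helper name), a minimal Go sketch of the difference described above: decoding into a map and re-encoding sorts keys alphabetically, while `json.RawMessage` validates syntax on `Unmarshal` but keeps the original bytes, so field order survives.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// validateSchema checks JSON syntax without re-encoding, so the
// caller's field order is preserved byte-for-byte.
func validateSchema(data []byte) (json.RawMessage, error) {
	var raw json.RawMessage
	// json.Unmarshal validates the input's syntax before copying
	// the raw bytes into raw; invalid JSON returns an error.
	if err := json.Unmarshal(data, &raw); err != nil {
		return nil, err
	}
	return raw, nil
}

func main() {
	schema := []byte(`{"zip":{"type":"string"},"city":{"type":"string"}}`)

	// Old approach: round-tripping through a Go map loses the
	// author's order, since Marshal sorts map keys alphabetically.
	var m map[string]any
	_ = json.Unmarshal(schema, &m)
	reencoded, _ := json.Marshal(m)
	fmt.Println(string(reencoded)) // keys come back sorted: "city" before "zip"

	// RawMessage approach: bytes are kept verbatim.
	raw, err := validateSchema(schema)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw)) // original order intact
}
```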
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8002/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8002/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5985
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5985/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5985/comments
|
https://api.github.com/repos/ollama/ollama/issues/5985/events
|
https://github.com/ollama/ollama/pull/5985
| 2,432,324,831
|
PR_kwDOJ0Z1Ps52leCz
| 5,985
|
Use llama3.1 in tools example
|
{
"login": "rgbkrk",
"id": 836375,
"node_id": "MDQ6VXNlcjgzNjM3NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/836375?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rgbkrk",
"html_url": "https://github.com/rgbkrk",
"followers_url": "https://api.github.com/users/rgbkrk/followers",
"following_url": "https://api.github.com/users/rgbkrk/following{/other_user}",
"gists_url": "https://api.github.com/users/rgbkrk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rgbkrk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rgbkrk/subscriptions",
"organizations_url": "https://api.github.com/users/rgbkrk/orgs",
"repos_url": "https://api.github.com/users/rgbkrk/repos",
"events_url": "https://api.github.com/users/rgbkrk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rgbkrk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-26T14:07:11
| 2024-08-08T01:31:50
| 2024-08-07T21:20:51
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5985",
"html_url": "https://github.com/ollama/ollama/pull/5985",
"diff_url": "https://github.com/ollama/ollama/pull/5985.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5985.patch",
"merged_at": "2024-08-07T21:20:50"
}
|
Running this example with `mistral` produces the error "mistral does not support tools". What wasn't obvious to me until I made this PR was that my copy of mistral needed upgrading for tools (`ollama pull mistral`). Making the example use `llama3.1` will lead to more success for other long-time ollama users.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5985/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6319
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6319/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6319/comments
|
https://api.github.com/repos/ollama/ollama/issues/6319/events
|
https://github.com/ollama/ollama/issues/6319
| 2,460,663,129
|
I_kwDOJ0Z1Ps6Sqr1Z
| 6,319
|
Models RuGPT3, RuBERT
|
{
"login": "DewiarQR",
"id": 64423698,
"node_id": "MDQ6VXNlcjY0NDIzNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/64423698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DewiarQR",
"html_url": "https://github.com/DewiarQR",
"followers_url": "https://api.github.com/users/DewiarQR/followers",
"following_url": "https://api.github.com/users/DewiarQR/following{/other_user}",
"gists_url": "https://api.github.com/users/DewiarQR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DewiarQR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DewiarQR/subscriptions",
"organizations_url": "https://api.github.com/users/DewiarQR/orgs",
"repos_url": "https://api.github.com/users/DewiarQR/repos",
"events_url": "https://api.github.com/users/DewiarQR/events{/privacy}",
"received_events_url": "https://api.github.com/users/DewiarQR/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 2
| 2024-08-12T10:51:26
| 2024-12-21T10:38:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
All models currently have pretty poor Russian language support. Is it possible to add RuGPT3, RuBERT models?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6319/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6319/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/766
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/766/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/766/comments
|
https://api.github.com/repos/ollama/ollama/issues/766/events
|
https://github.com/ollama/ollama/issues/766
| 1,939,890,658
|
I_kwDOJ0Z1Ps5zoGHi
| 766
|
Release mac and linux binaries alongside the desktop packages
|
{
"login": "Clivern",
"id": 1634427,
"node_id": "MDQ6VXNlcjE2MzQ0Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1634427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Clivern",
"html_url": "https://github.com/Clivern",
"followers_url": "https://api.github.com/users/Clivern/followers",
"following_url": "https://api.github.com/users/Clivern/following{/other_user}",
"gists_url": "https://api.github.com/users/Clivern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Clivern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Clivern/subscriptions",
"organizations_url": "https://api.github.com/users/Clivern/orgs",
"repos_url": "https://api.github.com/users/Clivern/repos",
"events_url": "https://api.github.com/users/Clivern/events{/privacy}",
"received_events_url": "https://api.github.com/users/Clivern/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-10-12T12:30:39
| 2023-11-11T22:56:39
| 2023-10-12T16:07:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Maybe ollama is intended to be a desktop app, but I believe many are using it as an API service.
Honestly, I couldn't get it to work as a desktop app on an Intel Mac, but it works as an API service. I'm assuming the 500% spike in CPU usage with each prompt and model pull is expected on a decent Mac. I guess I need to give it a try on a better machine.
Anyway, I gave it a try with goreleaser on a fork https://github.com/Clivern/ollama/commit/ef2d0da969343ba1d5fef0b8777f5cca056c1be1.
It failed to publish to GitHub, but maybe that's because it is a fork! I need to debug, but are you interested in such a thing?
Also, goreleaser doesn't append to an existing release; it publishes the changelog, binaries, etc., but the desktop packages can be added afterwards. It sounds like they are done manually.
|
{
"login": "Clivern",
"id": 1634427,
"node_id": "MDQ6VXNlcjE2MzQ0Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1634427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Clivern",
"html_url": "https://github.com/Clivern",
"followers_url": "https://api.github.com/users/Clivern/followers",
"following_url": "https://api.github.com/users/Clivern/following{/other_user}",
"gists_url": "https://api.github.com/users/Clivern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Clivern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Clivern/subscriptions",
"organizations_url": "https://api.github.com/users/Clivern/orgs",
"repos_url": "https://api.github.com/users/Clivern/repos",
"events_url": "https://api.github.com/users/Clivern/events{/privacy}",
"received_events_url": "https://api.github.com/users/Clivern/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/766/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6293
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6293/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6293/comments
|
https://api.github.com/repos/ollama/ollama/issues/6293/events
|
https://github.com/ollama/ollama/issues/6293
| 2,458,713,929
|
I_kwDOJ0Z1Ps6SjP9J
| 6,293
|
"The model you are attempting to pull requires a newer version of Ollama" when Ollama is built from the latest source
|
{
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/followers",
"following_url": "https://api.github.com/users/sammcj/following{/other_user}",
"gists_url": "https://api.github.com/users/sammcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammcj/subscriptions",
"organizations_url": "https://api.github.com/users/sammcj/orgs",
"repos_url": "https://api.github.com/users/sammcj/repos",
"events_url": "https://api.github.com/users/sammcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammcj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-08-09T21:59:06
| 2024-08-15T02:32:17
| 2024-08-09T22:31:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When trying to pull models from the official Ollama registry, Ollama built from source now seems to fail with an error that your Ollama version is too old.
```
ollama pull llama3.1:8b-instruct-q8_0
pulling manifest
Error: pull model manifest: 412:
The model you are attempting to pull requires a newer version of Ollama.
Please download the latest version at:
https://ollama.com/download
```
However, I highly doubt my Ollama version is too old, as I build it from source every day.
```
ollama --version
ollama version is e9aa5117c409c94861af1c50b246f29a72d05147
```
### OS
Linux, Docker
### GPU
Nvidia
### CPU
AMD
### Ollama version
e9aa5117c409c94861af1c50b246f29a72d05147
|
{
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/followers",
"following_url": "https://api.github.com/users/sammcj/following{/other_user}",
"gists_url": "https://api.github.com/users/sammcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammcj/subscriptions",
"organizations_url": "https://api.github.com/users/sammcj/orgs",
"repos_url": "https://api.github.com/users/sammcj/repos",
"events_url": "https://api.github.com/users/sammcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammcj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6293/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6293/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/269
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/269/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/269/comments
|
https://api.github.com/repos/ollama/ollama/issues/269/events
|
https://github.com/ollama/ollama/issues/269
| 1,835,339,821
|
I_kwDOJ0Z1Ps5tZRAt
| 269
|
Pressing enter during `ollama pull` causes newlines to be printed repeatedly
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-08-03T16:02:52
| 2023-12-24T21:39:30
| 2023-12-24T21:39:30
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
<img width="1531" alt="Screenshot 2023-08-03 at 11 59 21 AM" src="https://github.com/jmorganca/ollama/assets/251292/1e782cfa-75f2-4bc3-84da-567c685ef36c">
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/269/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1251
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1251/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1251/comments
|
https://api.github.com/repos/ollama/ollama/issues/1251/events
|
https://github.com/ollama/ollama/issues/1251
| 2,007,405,923
|
I_kwDOJ0Z1Ps53ppVj
| 1,251
|
How can I disable automatic model offloading from GPU memory
|
{
"login": "anan-dad",
"id": 30836142,
"node_id": "MDQ6VXNlcjMwODM2MTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/30836142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anan-dad",
"html_url": "https://github.com/anan-dad",
"followers_url": "https://api.github.com/users/anan-dad/followers",
"following_url": "https://api.github.com/users/anan-dad/following{/other_user}",
"gists_url": "https://api.github.com/users/anan-dad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anan-dad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anan-dad/subscriptions",
"organizations_url": "https://api.github.com/users/anan-dad/orgs",
"repos_url": "https://api.github.com/users/anan-dad/repos",
"events_url": "https://api.github.com/users/anan-dad/events{/privacy}",
"received_events_url": "https://api.github.com/users/anan-dad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-11-23T02:58:48
| 2023-11-23T03:05:50
| 2023-11-23T03:05:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
First of all, thank you for your great work with ollama!
I found that ollama will automatically offload models from GPU memory (very frequently, even after 2 minutes of inactivity).
But the loading process takes too much time. How can I force ollama to keep the model loaded in GPU memory?
Thanks
|
{
"login": "anan-dad",
"id": 30836142,
"node_id": "MDQ6VXNlcjMwODM2MTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/30836142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anan-dad",
"html_url": "https://github.com/anan-dad",
"followers_url": "https://api.github.com/users/anan-dad/followers",
"following_url": "https://api.github.com/users/anan-dad/following{/other_user}",
"gists_url": "https://api.github.com/users/anan-dad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anan-dad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anan-dad/subscriptions",
"organizations_url": "https://api.github.com/users/anan-dad/orgs",
"repos_url": "https://api.github.com/users/anan-dad/repos",
"events_url": "https://api.github.com/users/anan-dad/events{/privacy}",
"received_events_url": "https://api.github.com/users/anan-dad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1251/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2175
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2175/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2175/comments
|
https://api.github.com/repos/ollama/ollama/issues/2175/events
|
https://github.com/ollama/ollama/pull/2175
| 2,098,909,748
|
PR_kwDOJ0Z1Ps5k_KNU
| 2,175
|
refactor tensor read
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-24T19:10:03
| 2024-01-25T17:22:43
| 2024-01-25T17:22:42
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2175",
"html_url": "https://github.com/ollama/ollama/pull/2175",
"diff_url": "https://github.com/ollama/ollama/pull/2175.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2175.patch",
"merged_at": "2024-01-25T17:22:42"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2175/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8486
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8486/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8486/comments
|
https://api.github.com/repos/ollama/ollama/issues/8486/events
|
https://github.com/ollama/ollama/issues/8486
| 2,797,681,012
|
I_kwDOJ0Z1Ps6mwTl0
| 8,486
|
Add Tool Calling to the Generate Function
|
{
"login": "twright-0x1",
"id": 13889385,
"node_id": "MDQ6VXNlcjEzODg5Mzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/13889385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/twright-0x1",
"html_url": "https://github.com/twright-0x1",
"followers_url": "https://api.github.com/users/twright-0x1/followers",
"following_url": "https://api.github.com/users/twright-0x1/following{/other_user}",
"gists_url": "https://api.github.com/users/twright-0x1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/twright-0x1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/twright-0x1/subscriptions",
"organizations_url": "https://api.github.com/users/twright-0x1/orgs",
"repos_url": "https://api.github.com/users/twright-0x1/repos",
"events_url": "https://api.github.com/users/twright-0x1/events{/privacy}",
"received_events_url": "https://api.github.com/users/twright-0x1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-19T15:21:05
| 2025-01-19T15:21:05
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It appears from the API documentation and code examples available that tool calling is only possible with chat(). If this capability is feasible to add to generate() it would be much appreciated!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8486/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8486/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2908
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2908/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2908/comments
|
https://api.github.com/repos/ollama/ollama/issues/2908/events
|
https://github.com/ollama/ollama/issues/2908
| 2,166,302,209
|
I_kwDOJ0Z1Ps6BHyYB
| 2,908
|
How to specify the installation directory
|
{
"login": "yuanjie-ai",
"id": 20265321,
"node_id": "MDQ6VXNlcjIwMjY1MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/20265321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuanjie-ai",
"html_url": "https://github.com/yuanjie-ai",
"followers_url": "https://api.github.com/users/yuanjie-ai/followers",
"following_url": "https://api.github.com/users/yuanjie-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/yuanjie-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuanjie-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuanjie-ai/subscriptions",
"organizations_url": "https://api.github.com/users/yuanjie-ai/orgs",
"repos_url": "https://api.github.com/users/yuanjie-ai/repos",
"events_url": "https://api.github.com/users/yuanjie-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuanjie-ai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-03-04T08:56:56
| 2024-05-26T09:21:51
| 2024-03-21T11:36:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How to specify the installation directory
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2908/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4135
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4135/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4135/comments
|
https://api.github.com/repos/ollama/ollama/issues/4135/events
|
https://github.com/ollama/ollama/pull/4135
| 2,278,248,414
|
PR_kwDOJ0Z1Ps5ugQRD
| 4,135
|
Skip PhysX cudart library
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-05-03T18:56:55
| 2024-05-06T20:34:03
| 2024-05-06T20:34:00
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4135",
"html_url": "https://github.com/ollama/ollama/pull/4135",
"diff_url": "https://github.com/ollama/ollama/pull/4135.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4135.patch",
"merged_at": "2024-05-06T20:34:00"
}
|
For some reason this library gives incorrect GPU information, so skip it
I'm not convinced yet this is the optimal fix, but queuing this up in case we get ready to cut a new release and haven't found a better solution yet.
Fixes #4008
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4135/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3089
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3089/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3089/comments
|
https://api.github.com/repos/ollama/ollama/issues/3089/events
|
https://github.com/ollama/ollama/issues/3089
| 2,182,976,031
|
I_kwDOJ0Z1Ps6CHZIf
| 3,089
|
Error when requesting ollama api from another pc (windows)
|
{
"login": "insooneelife",
"id": 8437769,
"node_id": "MDQ6VXNlcjg0Mzc3Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8437769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/insooneelife",
"html_url": "https://github.com/insooneelife",
"followers_url": "https://api.github.com/users/insooneelife/followers",
"following_url": "https://api.github.com/users/insooneelife/following{/other_user}",
"gists_url": "https://api.github.com/users/insooneelife/gists{/gist_id}",
"starred_url": "https://api.github.com/users/insooneelife/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/insooneelife/subscriptions",
"organizations_url": "https://api.github.com/users/insooneelife/orgs",
"repos_url": "https://api.github.com/users/insooneelife/repos",
"events_url": "https://api.github.com/users/insooneelife/events{/privacy}",
"received_events_url": "https://api.github.com/users/insooneelife/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 16
| 2024-03-13T02:06:05
| 2024-05-13T21:15:11
| 2024-03-15T13:36:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I plan to set up ollama on another PC and do the work from my current PC.
However, when sending a request to ollama from my PC, I entered the IP address of the other PC, but there is no reply.
Can you tell me what the problem is?
request url
http://localhost:11434/api/chat -> http://172.168.10.1:11434/api/chat
|
{
"login": "insooneelife",
"id": 8437769,
"node_id": "MDQ6VXNlcjg0Mzc3Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8437769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/insooneelife",
"html_url": "https://github.com/insooneelife",
"followers_url": "https://api.github.com/users/insooneelife/followers",
"following_url": "https://api.github.com/users/insooneelife/following{/other_user}",
"gists_url": "https://api.github.com/users/insooneelife/gists{/gist_id}",
"starred_url": "https://api.github.com/users/insooneelife/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/insooneelife/subscriptions",
"organizations_url": "https://api.github.com/users/insooneelife/orgs",
"repos_url": "https://api.github.com/users/insooneelife/repos",
"events_url": "https://api.github.com/users/insooneelife/events{/privacy}",
"received_events_url": "https://api.github.com/users/insooneelife/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3089/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5452
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5452/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5452/comments
|
https://api.github.com/repos/ollama/ollama/issues/5452/events
|
https://github.com/ollama/ollama/issues/5452
| 2,387,458,446
|
I_kwDOJ0Z1Ps6OTbmO
| 5,452
|
MARKDOWN!!
|
{
"login": "ashercn97",
"id": 131724380,
"node_id": "U_kgDOB9n0XA",
"avatar_url": "https://avatars.githubusercontent.com/u/131724380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashercn97",
"html_url": "https://github.com/ashercn97",
"followers_url": "https://api.github.com/users/ashercn97/followers",
"following_url": "https://api.github.com/users/ashercn97/following{/other_user}",
"gists_url": "https://api.github.com/users/ashercn97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashercn97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashercn97/subscriptions",
"organizations_url": "https://api.github.com/users/ashercn97/orgs",
"repos_url": "https://api.github.com/users/ashercn97/repos",
"events_url": "https://api.github.com/users/ashercn97/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashercn97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 5
| 2024-07-03T01:42:42
| 2024-10-17T17:32:45
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I think it would be so cool if the Ollama CLI could render Markdown in the terminal. It is kind of hard to read some of the output as plain text, and I would love if it could use something like glow or mdcat. Thanks!
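In the meantime, a workaround along these lines is possible by piping the CLI's output into a terminal Markdown renderer (this is an illustrative sketch, assuming `ollama` and `glow` are both installed and a model named `llama3` has been pulled):

```shell
# Pipe the model's (Markdown-formatted) answer into glow for rendering.
# "-" tells glow to read from stdin.
ollama run llama3 "Explain goroutines in Go" | glow -
```

This loses the token-by-token streaming effect, since the renderer needs the full document before it can lay it out.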
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5452/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5452/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2168
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2168/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2168/comments
|
https://api.github.com/repos/ollama/ollama/issues/2168/events
|
https://github.com/ollama/ollama/issues/2168
| 2,097,815,632
|
I_kwDOJ0Z1Ps59CiBQ
| 2,168
|
Issues Running Ollama Container Behind Proxy - No Error Logs Found
|
{
"login": "OM-EL",
"id": 36996895,
"node_id": "MDQ6VXNlcjM2OTk2ODk1",
"avatar_url": "https://avatars.githubusercontent.com/u/36996895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OM-EL",
"html_url": "https://github.com/OM-EL",
"followers_url": "https://api.github.com/users/OM-EL/followers",
"following_url": "https://api.github.com/users/OM-EL/following{/other_user}",
"gists_url": "https://api.github.com/users/OM-EL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OM-EL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OM-EL/subscriptions",
"organizations_url": "https://api.github.com/users/OM-EL/orgs",
"repos_url": "https://api.github.com/users/OM-EL/repos",
"events_url": "https://api.github.com/users/OM-EL/events{/privacy}",
"received_events_url": "https://api.github.com/users/OM-EL/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 10
| 2024-01-24T09:26:08
| 2024-10-17T07:06:36
| 2024-03-11T19:02:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm encountering issues while trying to run an Ollama container behind a proxy. Here are the steps I've taken and the issues I've faced:
1. **Creating an Image with Certificate**:
```
cat Dockerfile
FROM ollama/ollama
COPY my-ca.pem /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates
```
2. **Starting a Container Using This Image with Proxy Variables Injected**:
```
docker run -d \
-e HTTPS_PROXY=http://x.x.x.x:3128 \
-e HTTP_PROXY=http://x.x.x.x:3128 \
-e http_proxy=http://x.x.x.x:3128 \
-e https_proxy=http://x.x.x.x:3128 \
-p 11434:11434 ollama-with-ca
```
3. **Inside the Container**:
- Ran `apt-get update` to confirm internet access and proper proxy functionality.
- Executed `ollama pull mistral` and `ollama run mistral:instruct`, but consistently encountered the error: "Error: something went wrong, please see the Ollama server logs for details."
- Container logs (`docker logs 8405972b3d6b`) showed no errors, only the following information:
```
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDppYjymfVcdtDNT/umLfrzlIx1QquQ/gTuSI7SAV194
2024/01/24 08:40:55 images.go:808: total blobs: 0
2024/01/24 08:40:55 images.go:815: total unused blobs removed: 0
2024/01/24 08:40:55 routes.go:930: Listening on [::]:11434 (version 0.1.20)
2024/01/24 08:40:56 shim_ext_server.go:142: Dynamic LLM variants [cuda]
2024/01/24 08:40:56 gpu.go:88: Detecting GPU type
2024/01/24 08:40:56 gpu.go:203: Searching for GPU management library libnvidia-ml.so
2024/01/24 08:40:56 gpu.go:248: Discovered GPU libraries: []
2024/01/24 08:40:56 gpu.go:203: Searching for GPU management library librocm_smi64.so
2024/01/24 08:40:56 gpu.go:248: Discovered GPU libraries: []
2024/01/24 08:40:56 routes.go:953: no GPU detected
```
4. **Using Wget to Download the Model**:
- Successfully downloaded "mistral-7b-instruct-v0.1.Q5_K_M.gguf" via `wget`.
- Created a simple ModelFile:
```
FROM /home/mistral-7b-instruct-v0.1.Q5_K_M.gguf
```
- Executed `ollama create mistralModel -f Modelfile`, resulting in the same error: "Error: something went wrong, please see the Ollama server logs for details."
- The logs from `docker logs 8405972b3d6b` again showed no error:
```
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is:
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDppYjymfVcdtDNT/umLfrzlIx1QquQ/gTuSI7SAV194
2024/01/24 08:40:55 images.go:808: total blobs: 0
2024/01/24 08:40:55 images.go:815: total unused blobs removed: 0
2024/01/24 08:40:55 routes.go:930: Listening on [::]:11434 (version 0.1.20)
2024/01/24 08:40:56 shim_ext_server.go:142: Dynamic LLM variants [cuda]
2024/01/24 08:40:56 gpu.go:88: Detecting GPU type
```
When making an HTTP request to the Ollama server in my browser, I get "Ollama is running".
I also found that even `ollama list` gives the same error: "Error: something went wrong, please see the ollama server logs for details", and still no logs.
I did not find any logs in the files where Ollama saves logs; the only logs are the Docker logs, and they contain nothing.
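One thing worth checking: with `HTTP_PROXY`/`HTTPS_PROXY` set inside the container, the `ollama` CLI (itself an HTTP client talking to the server on `localhost:11434`) may route its own requests through the proxy, which would explain client-side errors with no server logs. A hedged sketch of a fix is to exempt loopback traffic via `NO_PROXY` (assumption: your proxy is only needed for outbound internet access, not loopback):

```shell
# Same run command as above, plus NO_PROXY so the CLI's requests to the
# local server on 11434 bypass the proxy. x.x.x.x is the proxy host.
docker run -d \
  -e HTTPS_PROXY=http://x.x.x.x:3128 \
  -e HTTP_PROXY=http://x.x.x.x:3128 \
  -e https_proxy=http://x.x.x.x:3128 \
  -e http_proxy=http://x.x.x.x:3128 \
  -e NO_PROXY=localhost,127.0.0.1 \
  -e no_proxy=localhost,127.0.0.1 \
  -p 11434:11434 ollama-with-ca
```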
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2168/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2168/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/5988
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5988/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5988/comments
|
https://api.github.com/repos/ollama/ollama/issues/5988/events
|
https://github.com/ollama/ollama/issues/5988
| 2,432,490,642
|
I_kwDOJ0Z1Ps6Q_NyS
| 5,988
|
GPU with 12GB VRAM couldn't load 8B model under WSL2
|
{
"login": "hoangminh1109",
"id": 20716428,
"node_id": "MDQ6VXNlcjIwNzE2NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/20716428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoangminh1109",
"html_url": "https://github.com/hoangminh1109",
"followers_url": "https://api.github.com/users/hoangminh1109/followers",
"following_url": "https://api.github.com/users/hoangminh1109/following{/other_user}",
"gists_url": "https://api.github.com/users/hoangminh1109/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoangminh1109/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoangminh1109/subscriptions",
"organizations_url": "https://api.github.com/users/hoangminh1109/orgs",
"repos_url": "https://api.github.com/users/hoangminh1109/repos",
"events_url": "https://api.github.com/users/hoangminh1109/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoangminh1109/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677675697,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgU-sQ",
"url": "https://api.github.com/repos/ollama/ollama/labels/wsl",
"name": "wsl",
"color": "7E0821",
"default": false,
"description": "Issues using WSL"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-07-26T15:37:39
| 2024-08-03T10:55:00
| 2024-08-03T10:55:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm unable to run any of the small models (8B) on my RTX 3060 12GB.
Ollama is installed in WSL2 under Win10.

Server log uploaded [ollama_log_error.txt](https://github.com/user-attachments/files/16393770/ollama_log_error.txt)
Some more information:
- nvidia-smi works well.
- cuda installed, cuda example deviceQuery works well.
### OS
WSL2
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.0
|
{
"login": "hoangminh1109",
"id": 20716428,
"node_id": "MDQ6VXNlcjIwNzE2NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/20716428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoangminh1109",
"html_url": "https://github.com/hoangminh1109",
"followers_url": "https://api.github.com/users/hoangminh1109/followers",
"following_url": "https://api.github.com/users/hoangminh1109/following{/other_user}",
"gists_url": "https://api.github.com/users/hoangminh1109/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoangminh1109/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoangminh1109/subscriptions",
"organizations_url": "https://api.github.com/users/hoangminh1109/orgs",
"repos_url": "https://api.github.com/users/hoangminh1109/repos",
"events_url": "https://api.github.com/users/hoangminh1109/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoangminh1109/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5988/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3742
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3742/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3742/comments
|
https://api.github.com/repos/ollama/ollama/issues/3742/events
|
https://github.com/ollama/ollama/issues/3742
| 2,251,904,090
|
I_kwDOJ0Z1Ps6GOVRa
| 3,742
|
Slow Performance with Llama2 on a Dual-GPU System - Seeking Advice
|
{
"login": "AkiMatsushita",
"id": 5045321,
"node_id": "MDQ6VXNlcjUwNDUzMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5045321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AkiMatsushita",
"html_url": "https://github.com/AkiMatsushita",
"followers_url": "https://api.github.com/users/AkiMatsushita/followers",
"following_url": "https://api.github.com/users/AkiMatsushita/following{/other_user}",
"gists_url": "https://api.github.com/users/AkiMatsushita/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AkiMatsushita/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AkiMatsushita/subscriptions",
"organizations_url": "https://api.github.com/users/AkiMatsushita/orgs",
"repos_url": "https://api.github.com/users/AkiMatsushita/repos",
"events_url": "https://api.github.com/users/AkiMatsushita/events{/privacy}",
"received_events_url": "https://api.github.com/users/AkiMatsushita/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-04-19T01:41:32
| 2024-04-22T22:41:38
| 2024-04-22T22:39:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello Ollama Community,
I'm encountering extremely slow performance while running Ollama on my PC, specifically with models like Llama2 13B. The issue isn't just the slow output speed (around 1 token/min); I'm also concerned that my GPUs might not be utilized properly. Below are my PC specs:
- CPU: Intel Core i7 12650H
- Memory: 32GB
- GPU0: Intel UHD Graphics
- GPU1: NVIDIA GeForce RTX 4060 with 8GB VRAM
When running models such as the Llama2 13B, the performance drastically slows down. Interestingly, out of the 8GB VRAM, only about 6.1GB is being used, and the GPU utilization rate is close to 0%.
For comparison, I've also tried running larger models like Llama2 70B on a different PC equipped with a GeForce RTX 4060ti with 16GB VRAM. In this case, almost all of the VRAM is utilized, and the GPU utilization rate reaches about 10%.
I'm wondering if the issue with the first PC might be related to it having two GPUs, which could be causing incorrect GPU utilization, or if it's simply a matter of insufficient VRAM.
Could anyone please advise on whether this is an issue with GPU utilization due to the dual-GPU setup or if the VRAM is indeed insufficient? Any insights or suggestions would be greatly appreciated.
Thank you!
|
{
"login": "AkiMatsushita",
"id": 5045321,
"node_id": "MDQ6VXNlcjUwNDUzMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5045321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AkiMatsushita",
"html_url": "https://github.com/AkiMatsushita",
"followers_url": "https://api.github.com/users/AkiMatsushita/followers",
"following_url": "https://api.github.com/users/AkiMatsushita/following{/other_user}",
"gists_url": "https://api.github.com/users/AkiMatsushita/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AkiMatsushita/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AkiMatsushita/subscriptions",
"organizations_url": "https://api.github.com/users/AkiMatsushita/orgs",
"repos_url": "https://api.github.com/users/AkiMatsushita/repos",
"events_url": "https://api.github.com/users/AkiMatsushita/events{/privacy}",
"received_events_url": "https://api.github.com/users/AkiMatsushita/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3742/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5912
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5912/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5912/comments
|
https://api.github.com/repos/ollama/ollama/issues/5912/events
|
https://github.com/ollama/ollama/pull/5912
| 2,427,590,112
|
PR_kwDOJ0Z1Ps52V55j
| 5,912
|
Server tls 3203
|
{
"login": "gabe-l-hart",
"id": 1254484,
"node_id": "MDQ6VXNlcjEyNTQ0ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1254484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gabe-l-hart",
"html_url": "https://github.com/gabe-l-hart",
"followers_url": "https://api.github.com/users/gabe-l-hart/followers",
"following_url": "https://api.github.com/users/gabe-l-hart/following{/other_user}",
"gists_url": "https://api.github.com/users/gabe-l-hart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gabe-l-hart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gabe-l-hart/subscriptions",
"organizations_url": "https://api.github.com/users/gabe-l-hart/orgs",
"repos_url": "https://api.github.com/users/gabe-l-hart/repos",
"events_url": "https://api.github.com/users/gabe-l-hart/events{/privacy}",
"received_events_url": "https://api.github.com/users/gabe-l-hart/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-24T13:25:29
| 2024-10-03T16:04:14
| 2024-10-03T16:04:14
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5912",
"html_url": "https://github.com/ollama/ollama/pull/5912",
"diff_url": "https://github.com/ollama/ollama/pull/5912.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5912.patch",
"merged_at": null
}
|
**Disclaimer!**
This PR started as a small feature addition and resulted in some significant scope creep when I added the unit tests. I'm certainly open to trying to remove some of that refactoring for `ServerNonBlocking` if that's preferred, but figured it was worth presenting as-is to start the discussion.
## Issues
This issue supports https://github.com/ollama/ollama/issues/3203 by adding encryption between client and server. It does not fully address the issue since the core feature request is for auth.
## Description
This PR adds support for running the primary `ollama` server/client interactions using TLS and Mutual TLS (mTLS). It does not address encryption between the `ollama` server and the individual model servers.
## Changes
The changes in this PR are grouped as follows:
### New `envconfig` variables:
* To boot an (m)TLS server:
* `OLLAMA_HOST`: If the `scheme` is `https://`, the server will attempt to boot with TLS or mTLS based on the presence of the below variables
* `OLLAMA_TLS_SERVER_KEY`: File with the private key (required for TLS)
* `OLLAMA_TLS_SERVER_CERT`: File with the public cert (required for TLS)
* `OLLAMA_TLS_CLIENT_CA`: File with the CA cert for the key/cert pair the client will use (required for mTLS)
* To connect a client to an (m)TLS server:
* `OLLAMA_HOST`: If the `scheme` is `https://`, the client will attempt to connect to a (m)TLS server depending on the presence of the below variables
* `OLLAMA_TLS_SERVER_CA`: File with the CA cert for the server's key/cert pair (required for TLS if the server's key/cert pair is signed by a non-system CA. If not given, but the `scheme` is `https://`, the system CAs will be used.)
* `OLLAMA_TLS_CLIENT_KEY`: File with the private key for the client when connecting to an mTLS server (required for mTLS)
* `OLLAMA_TLS_CLIENT_CERT`: File with the public cert for the client when connecting to an mTLS server (required for mTLS)
### Config parsing:
* In `envconfig`, there is a new `getTlsConfig` function which parses all of the TLS-related variables for both client and server
* `getTlsConfig` is used to populate `envconfig.ServerTlsConfig` and `envconfig.ClientTlsConfig` with [tls.Config](https://pkg.go.dev/crypto/tls#Config) objects if configured to use (m)TLS
* If not configured for TLS, these objects will remain `nil` which is the indicator elsewhere in the code that TLS is not enabled
### Server Setup
The primary change in `routes.go` is to add a conditional around calling the `ServeTLS` function on the `http.Server` object based on the value of `envconfig.ServerTlsConfig`.
The rest of the changes there were all made in support of helping to make the server easier to boot in unit tests. For that, I split the `Serve` function into two parts: `ServeNonBlocking` which returns an instance of the `server.Server` struct, and `Serve` which uses `ServeNonBlocking` and then blocks on the server terminating.
### Client Setup
In `client.go`, the change is to look at `envconfig.ClientTlsConfig` and instantiate the `http.Client` accordingly
### Unit Testing
* Add a new testing package `envconfig/configtest` that holds helpers for dynamically generating TLS data
* Extend the `envconfig` tests to test the parsing of config data
* Add a new `server/mtls_test.go` test suite that tests the server/client communication with no TLS, standard TLS, and mTLS
## Testing
In addition to the unit tests, I've also verified that the communication works separately using scripts I have from other projects for generating self-signed mTLS data. Here are the steps I used:
<details>
<summary>gen_mtls_test_files.sh</summary>
```sh
#!/usr/bin/env bash
## Config ######################################################################
# Optional additional SANs can be set with SANS
SANS=${SANS:-""}
# CN can be overloaded
CN=${CN:-"foo.bar.com"}
set -eo pipefail
# Set up additional SANs block
IFS=' ' read -r -a sans_arr <<< "$SANS"
extra_sans=""
counter="1"
for san in "${sans_arr[@]}"
do
counter=$(expr "$counter" "+" "1")
echo "Adding SAN DNS.$counter [$san]"
extra_sans="$extra_sans\nDNS.$counter = $san"
done
root_name="ca"
root_key="$root_name.key.pem"
root_crt="$root_name.cert.pem"
server_name="server"
server_key="$server_name.key.pem"
server_crt="$server_name.cert.pem"
client_name="client"
client_key="$client_name.key.pem"
client_crt="$client_name.cert.pem"
common_config='
[req]
default_bits = 4096
default_keyfile = server.key.pem
distinguished_name = req_distinguished_name
x509_extensions = x509_ext
string_mask = utf8only
[req_distinguished_name]
countryName = Country Name (2 letter code)
countryName_default = US
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Denver
localityName = Locality Name (eg, city)
localityName_default = Denver
organizationName = Organization Name (eg, company)
organizationName_default = Gabe Inc
commonName = Common Name (eg, YOUR name)
commonName_max = 64
'
ca_config="
$common_config
[x509_ext]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always, issuer
basicConstraints = critical, CA:TRUE, pathlen:1
keyUsage = keyCertSign, cRLSign
"
derived_config="
$common_config
[x509_ext]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = localhost
$extra_sans
IP.1 = '127.0.0.1'
"
# use wild card in subject, not all clients accept that, but Java grpc client does
# we also have subject alternative names 127.0.0.1 and localhost in our openssl.cnf file (used when creating the server crt)
SUBJ="/C=US/ST=Denver/L=Denver/O=Gabe Inc/CN=$CN"
# Set the expiration for 10 years
expiration_days=3650
## Root ########################################################################
# Create the root key
echo "[Creating root key]"
openssl genrsa -out $root_key 2048
# create root key and cert
echo "[Creating root cert]"
openssl req \
-config <(echo -e "$ca_config") \
-x509 \
-new \
-nodes \
-key $root_key \
-sha256 \
-subj "$SUBJ" \
-out $root_crt
## Server ######################################################################
# create a new server key and certificate signing request
echo "[Creating server key and signing request]"
openssl req -config <(echo -e "$derived_config") -new -nodes -sha256 -keyout $server_key -out $server_name.csr -newkey rsa:2048 -subj "$SUBJ"
# sign the request with our root cert key
echo "[Sign server cert]"
openssl x509 -req -sha256 -extfile <(echo -e "$derived_config") -extensions x509_ext -in $server_name.csr -CA $root_crt -CAkey $root_key -CAcreateserial -out $server_crt -days $expiration_days
# write out server key in pkcs8 format, required by grpc
echo "[Convert server key to pkcs8]"
cp $server_key $server_key.tmp
openssl pkcs8 -topk8 -nocrypt -in $server_key.tmp -out $server_key
## Client ######################################################################
# create a new server key and certificate signing request
echo "[Creating client key and signing request]"
openssl req -config <(echo -e "$derived_config") -new -nodes -sha256 -keyout $client_key -out $client_name.csr -newkey rsa:2048 -subj "$SUBJ"
# sign the request with our root cert key
echo "[Sign client cert]"
openssl x509 -req -sha256 -extfile <(echo -e "$derived_config") -extensions x509_ext -in $client_name.csr -CA $root_crt -CAkey $root_key -CAcreateserial -out $client_crt -days $expiration_days
# write out client key in pkcs8 format, required by grpc
echo "[Convert client key to pkcs8]"
cp $client_key $client_key.tmp
openssl pkcs8 -topk8 -nocrypt -in $client_key.tmp -out $client_key
# Clean up
rm *.tmp
rm *.csr
rm *.srl
```
</details>
```sh
# Using the above gen_mtls_test_files.sh
gen_mtls_test_files.sh
# Build it
go build .
# Boot the server with mTLS enabled
OLLAMA_HOST="https://localhost:54321" \
OLLAMA_TLS_SERVER_KEY=server.key.pem \
OLLAMA_TLS_SERVER_CERT=server.cert.pem \
OLLAMA_TLS_CLIENT_CA=ca.cert.pem \
./ollama serve
# Run a client command with mTLS enabled (separate terminal or background the server)
OLLAMA_HOST="https://127.0.0.1:54321" \
OLLAMA_TLS_SERVER_CA=ca.cert.pem \
OLLAMA_TLS_CLIENT_KEY=client.key.pem \
OLLAMA_TLS_CLIENT_CERT=client.cert.pem \
./ollama ls
```
|
{
"login": "gabe-l-hart",
"id": 1254484,
"node_id": "MDQ6VXNlcjEyNTQ0ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1254484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gabe-l-hart",
"html_url": "https://github.com/gabe-l-hart",
"followers_url": "https://api.github.com/users/gabe-l-hart/followers",
"following_url": "https://api.github.com/users/gabe-l-hart/following{/other_user}",
"gists_url": "https://api.github.com/users/gabe-l-hart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gabe-l-hart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gabe-l-hart/subscriptions",
"organizations_url": "https://api.github.com/users/gabe-l-hart/orgs",
"repos_url": "https://api.github.com/users/gabe-l-hart/repos",
"events_url": "https://api.github.com/users/gabe-l-hart/events{/privacy}",
"received_events_url": "https://api.github.com/users/gabe-l-hart/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5912/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5912/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/236
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/236/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/236/comments
|
https://api.github.com/repos/ollama/ollama/issues/236/events
|
https://github.com/ollama/ollama/pull/236
| 1,826,971,486
|
PR_kwDOJ0Z1Ps5WryWE
| 236
|
check os.Walk err
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-28T19:15:46
| 2023-07-28T21:14:22
| 2023-07-28T21:14:21
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/236",
"html_url": "https://github.com/ollama/ollama/pull/236",
"diff_url": "https://github.com/ollama/ollama/pull/236.diff",
"patch_url": "https://github.com/ollama/ollama/pull/236.patch",
"merged_at": "2023-07-28T21:14:21"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/236/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1541
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1541/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1541/comments
|
https://api.github.com/repos/ollama/ollama/issues/1541/events
|
https://github.com/ollama/ollama/pull/1541
| 2,042,976,921
|
PR_kwDOJ0Z1Ps5iEfYP
| 1,541
|
add API create/copy handlers
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-15T06:20:11
| 2023-12-15T19:59:19
| 2023-12-15T19:59:18
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1541",
"html_url": "https://github.com/ollama/ollama/pull/1541",
"diff_url": "https://github.com/ollama/ollama/pull/1541.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1541.patch",
"merged_at": "2023-12-15T19:59:18"
}
|
This change adds a test for calling `POST /api/create` which creates a new model.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1541/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1889
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1889/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1889/comments
|
https://api.github.com/repos/ollama/ollama/issues/1889/events
|
https://github.com/ollama/ollama/issues/1889
| 2,074,013,731
|
I_kwDOJ0Z1Ps57nvAj
| 1,889
|
Phi2/dolphin-phi Disobedient on system prompt Biblical topics:
|
{
"login": "oliverbob",
"id": 23272429,
"node_id": "MDQ6VXNlcjIzMjcyNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverbob",
"html_url": "https://github.com/oliverbob",
"followers_url": "https://api.github.com/users/oliverbob/followers",
"following_url": "https://api.github.com/users/oliverbob/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverbob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliverbob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverbob/subscriptions",
"organizations_url": "https://api.github.com/users/oliverbob/orgs",
"repos_url": "https://api.github.com/users/oliverbob/repos",
"events_url": "https://api.github.com/users/oliverbob/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliverbob/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-01-10T10:02:39
| 2024-05-10T00:16:11
| 2024-05-10T00:16:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Steps to reproduce:
Download a new Bible Dataset from [KJV Markdown .md](https://github.com/arleym/kjv-markdown/tree/master)
```
#!/bin/bash
sudo rm joined.md
# Prepend content to the joined.md file
echo "FROM dolphin-phi" >> ./joined.md
echo "# set the temperature to 1 [higher is more creative, lower is more coherent]" >> ./joined.md
echo "PARAMETER temperature 1" >> ./joined.md
echo 'SYSTEM """' >> ./joined.md
echo 'Instruction: Modelfile Structure Understanding' >> ./joined.md
echo 'The Modelfile follows a structure similar to the Bible, with books, chapters, and verses.' >> ./joined.md
echo 'For example, here are excerpts from the first and second chapters of Genesis:' >> ./joined.md
echo '' >> ./joined.md
echo 'Genesis' >> ./joined.md
echo 'Genesis Chapter 1' >> ./joined.md
echo 'Genesis 1:1 "In the beginning God created the heaven and the earth."' >> ./joined.md
echo 'Genesis 1:2 "And the earth was without form, and void; and darkness was upon the face of the deep. And the Spirit of God moved upon the face of the waters."' >> ./joined.md
echo 'Genesis 1:3 "And God said, Let there be light: and there was light."' >> ./joined.md
echo 'Genesis 1:4 "And God saw the light, that it was good: and God divided the light from the darkness."' >> ./joined.md
echo 'Genesis 1:5 "And God called the light Day, and the darkness he called Night. And the evening and the morning were the first day."' >> ./joined.md
echo '...' >> ./joined.md
echo 'Genesis Chapter 2' >> ./joined.md
echo 'Genesis 2:1 "Thus the heavens and the earth were finished, and all the host of them."' >> ./joined.md
echo 'Genesis 2:2 "And on the seventh day God ended his work which he had made; and he rested on the seventh day from all his work which he had made."' >> ./joined.md
echo '...' >> ./joined.md
echo 'Revelation Chapter 22' >> ./joined.md
echo 'Revelation 22:1 "And he shewed me a pure river of water of life, clear as crystal, proceeding out of the throne of God and of the Lamb."' >> ./joined.md
echo 'Revelation 22:2 "In the midst of the street of it, and on either side of the river, was there the tree of life, which bare twelve manner of fruits, and yielded her fruit every month: and the leaves of the tree were for the healing of the nations."' >> ./joined.md
echo '...' >> ./joined.md
echo 'eof' >> ./joined.md
echo "(John 1:1 In the beginning was the Word, and the Word was with God, and the Word was God.) is not (Genesis 1:1: In the beginning God created the heaven and the earth.)" >> ./joined.md
echo 'End of Modelfile Structure Understanding' >> ./joined.md
# Add few-shot learning examples and introduction
echo 'Introduction: "Tell me about the Bible."' >> ./joined.md
echo 'You: "The Bible is a collection of religious texts or scriptures sacred to Christians, Jews, Samaritans, and others. It is divided into two main sections: the Old Testament and the New Testament."' >> ./joined.md
echo '' >> ./joined.md
echo 'Introduction: "What is the significance of Genesis in the Bible?"' >> ./joined.md
echo 'You: "Genesis is the first book of the Bible and is highly significant as it contains the account of the creation of the world, the origin of humanity, and key events such as the stories of Adam and Eve, Noah, and the Tower of Babel."' >> ./joined.md
echo '' >> ./joined.md
echo 'Instruction: "When asked about a verse like Genesis 1:1, your response should be:"' >> ./joined.md
echo 'You: "In the beginning God created the heaven and the earth."' >> ./joined.md
echo 'Instruction: "When asked about a verse like Proverbs 3:5-6, your response should be:"' >> ./joined.md
echo 'You: "Trust in the LORD with all thine heart; and lean not unto thine own understanding. In all thy ways acknowledge him, and he shall direct thy paths."' >> ./joined.md
echo 'Instruction: "When asked about a verse like John 3:16, your response should be:"' >> ./joined.md
echo 'Instruction: "For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life."' >> ./joined.md
# Concatenate all .md files into joined.md, arranged by numeric order
find ./kjv-markdown -name "*.md" -print0 | sort -zV | xargs -0 cat >> ./joined.md
sed -i 's/#//g' ./joined.md
# Append content to the end of the joined.md file
echo '"""' >> ./joined.md
# Display the head of the joined.md file
echo "=== Head of joined.md ==="
head ./joined.md
# Display the tail of the joined.md file
echo "=== Tail of joined.md ==="
tail ./joined.md
```
To add more context (for others wondering how this relates to Ollama or dolphin-phi), here's the quick way to reproduce:
`ollama create kjv -f ./joined.md`
`ollama run kjv`
Ask questions:
1. How many chapters are there in Genesis?
2. What is the first verse in Genesis?
3. Genesis 1:1.
4. What is John 3:15?
5. What is the first verse in Revelation?
6. Who were the first people in Genesis?
7. How many chapters are there in Revelation?
This makes me wonder how Phi was developed by the Microsoft team/community. Trying it on other topics, though, the model is extremely accurate.
Question:
- How do I make the Phi Model obedient to Christian text in a system prompt?
- Must I retrain the model from scratch?
- What is the quickest way to retrain this model from a custom dataset?
Thanks all for creating such a very powerful AI library.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1889/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/939
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/939/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/939/comments
|
https://api.github.com/repos/ollama/ollama/issues/939/events
|
https://github.com/ollama/ollama/issues/939
| 1,966,246,147
|
I_kwDOJ0Z1Ps51MokD
| 939
|
Low memory systems with a lot of VRAM hit a memory issue
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-10-27T22:06:04
| 2024-01-10T15:08:21
| 2024-01-10T15:08:21
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When creating a small instance with <4GB of RAM, `ollama` hits an error when loading the model into VRAM
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/939/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/372
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/372/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/372/comments
|
https://api.github.com/repos/ollama/ollama/issues/372/events
|
https://github.com/ollama/ollama/pull/372
| 1,855,522,492
|
PR_kwDOJ0Z1Ps5YL5qy
| 372
|
model and file type as strings
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-17T18:41:58
| 2023-08-17T22:10:59
| 2023-08-17T22:10:59
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/372",
"html_url": "https://github.com/ollama/ollama/pull/372",
"diff_url": "https://github.com/ollama/ollama/pull/372.diff",
"patch_url": "https://github.com/ollama/ollama/pull/372.patch",
"merged_at": "2023-08-17T22:10:59"
}
|
Instead of representing model and file type as their native int values in the manifest config, represent them as user-friendly strings.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/372/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/270
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/270/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/270/comments
|
https://api.github.com/repos/ollama/ollama/issues/270/events
|
https://github.com/ollama/ollama/pull/270
| 1,835,562,166
|
PR_kwDOJ0Z1Ps5XIuTO
| 270
|
update llama.cpp
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-03T18:50:35
| 2023-08-03T19:09:02
| 2023-08-03T19:09:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/270",
"html_url": "https://github.com/ollama/ollama/pull/270",
"diff_url": "https://github.com/ollama/ollama/pull/270.diff",
"patch_url": "https://github.com/ollama/ollama/pull/270.patch",
"merged_at": "2023-08-03T19:09:01"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/270/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/270/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1759
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1759/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1759/comments
|
https://api.github.com/repos/ollama/ollama/issues/1759/events
|
https://github.com/ollama/ollama/issues/1759
| 2,062,122,844
|
I_kwDOJ0Z1Ps566X9c
| 1,759
|
Please add TinyGPT-V model support
|
{
"login": "yangyang0507",
"id": 5666807,
"node_id": "MDQ6VXNlcjU2NjY4MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5666807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangyang0507",
"html_url": "https://github.com/yangyang0507",
"followers_url": "https://api.github.com/users/yangyang0507/followers",
"following_url": "https://api.github.com/users/yangyang0507/following{/other_user}",
"gists_url": "https://api.github.com/users/yangyang0507/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yangyang0507/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangyang0507/subscriptions",
"organizations_url": "https://api.github.com/users/yangyang0507/orgs",
"repos_url": "https://api.github.com/users/yangyang0507/repos",
"events_url": "https://api.github.com/users/yangyang0507/events{/privacy}",
"received_events_url": "https://api.github.com/users/yangyang0507/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2024-01-02T09:03:45
| 2024-01-02T11:34:34
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones
Github: https://github.com/DLYuanGod/TinyGPT-V
HuggingFace: https://huggingface.co/Tyrannosaurus/TinyGPT-V
It stands out because it only requires a 24G GPU for training, and just an 8G GPU or CPU for inference. TinyGPT-V is based on Phi-2, combining an effective language backbone with pre-trained visual modules from BLIP-2 or CLIP.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1759/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1759/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/483
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/483/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/483/comments
|
https://api.github.com/repos/ollama/ollama/issues/483/events
|
https://github.com/ollama/ollama/issues/483
| 1,885,300,022
|
I_kwDOJ0Z1Ps5wX2U2
| 483
|
No response from model with giant request
|
{
"login": "FairyTail2000",
"id": 22645621,
"node_id": "MDQ6VXNlcjIyNjQ1NjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/22645621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FairyTail2000",
"html_url": "https://github.com/FairyTail2000",
"followers_url": "https://api.github.com/users/FairyTail2000/followers",
"following_url": "https://api.github.com/users/FairyTail2000/following{/other_user}",
"gists_url": "https://api.github.com/users/FairyTail2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FairyTail2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FairyTail2000/subscriptions",
"organizations_url": "https://api.github.com/users/FairyTail2000/orgs",
"repos_url": "https://api.github.com/users/FairyTail2000/repos",
"events_url": "https://api.github.com/users/FairyTail2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/FairyTail2000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-09-07T07:41:24
| 2023-12-04T19:24:58
| 2023-12-04T19:24:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Using my own frontend with the model `codellama:34b-code-q4_0`, I send a giant block of code (~10 kB). The model then runs for 5-6 minutes, but only a single token comes out.
This is the http response:
>{"model":"codellama:34b-code-q4_0","created_at":"2023-09-07T07:34:32.574995065Z","response":"\n","done":false}
>{"model":"codellama:34b-code-q4_0","created_at":"2023-09-07T07:34:33.221286574Z","done":true,"context":[truncated],"total_duration":329330974773,"load_duration":688284882,"prompt_eval_count":1207,"prompt_eval_duration":327988245000,"eval_count":1,"eval_duration":641399000}
I cannot share the code I used since it's proprietary, but I think any big blob of code will reproduce this.
Here is also the log output generated by ollama:
> [GIN] 2023/09/07 - 09:28:50 | 200 | 1.680576ms | 127.0.0.1 | GET "/api/tags"
2023/09/07 09:29:03 ggml_llama.go:311: starting llama.cpp server
2023/09/07 09:29:03 ggml_llama.go:333: waiting for llama.cpp server to start responding
{"timestamp":1694071743,"level":"WARNING","function":"server_params_parse","line":845,"message":"Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_layers":0}
{"timestamp":1694071743,"level":"INFO","function":"main","line":1190,"message":"build info","build":1009,"commit":"9e232f0"}
{"timestamp":1694071743,"level":"INFO","function":"main","line":1192,"message":"system info","n_threads":8,"total_threads":16,"system_info":"AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | "}
llama server listening at http://127.0.0.1:61088
{"timestamp":1694071744,"level":"INFO","function":"main","line":1443,"message":"HTTP server listening","hostname":"127.0.0.1","port":61088}
{"timestamp":1694071744,"level":"INFO","function":"log_server_request","line":1157,"message":"request","remote_addr":"127.0.0.1","remote_port":41400,"status":200,"method":"HEAD","path":"/","params":{}}
2023/09/07 09:29:04 ggml_llama.go:342: llama.cpp server started in 0.601793 seconds
{"timestamp":1694071744,"level":"INFO","function":"log_server_request","line":1157,"message":"request","remote_addr":"127.0.0.1","remote_port":41400,"status":200,"method":"POST","path":"/tokenize","params":{}}
{"timestamp":1694071744,"level":"INFO","function":"log_server_request","line":1157,"message":"request","remote_addr":"127.0.0.1","remote_port":41400,"status":200,"method":"POST","path":"/tokenize","params":{}}
{"timestamp":1694072073,"level":"INFO","function":"log_server_request","line":1157,"message":"request","remote_addr":"127.0.0.1","remote_port":41400,"status":200,"method":"POST","path":"/completion","params":{}}
{"timestamp":1694072073,"level":"INFO","function":"log_server_request","line":1157,"message":"request","remote_addr":"127.0.0.1","remote_port":41400,"status":200,"method":"POST","path":"/tokenize","params":{}}
[GIN] 2023/09/07 - 09:34:33 | 200 | 5m29s | 127.0.0.1 | POST "/api/generate"
> llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 8192
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 64
llama_model_load_internal: n_head_kv = 8
llama_model_load_internal: n_layer = 48
llama_model_load_internal: n_rot = 128
llama_model_load_internal: n_gqa = 8
llama_model_load_internal: rnorm_eps = 5.0e-06
llama_model_load_internal: n_ff = 22016
llama_model_load_internal: freq_base = 1000000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: model size = 34B
llama_model_load_internal: ggml ctx size = 0.13 MB
llama_model_load_internal: mem required = 18168.87 MB (+ 384.00 MB per state)
llama_new_context_with_model: kv self size = 384.00 MB
llama_new_context_with_model: compute buffer total size = 305.35 MB
> llama_print_timings: load time = 134902.74 ms
llama_print_timings: sample time = 1.16 ms / 2 runs ( 0.58 ms per token, 1730.10 tokens per second)
llama_print_timings: prompt eval time = 327988.24 ms / 1207 tokens ( 271.74 ms per token, 3.68 tokens per second)
llama_print_timings: eval time = 641.40 ms / 1 runs ( 641.40 ms per token, 1.56 tokens per second)
llama_print_timings: total time = 328637.49 ms
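The per-token and tokens-per-second figures in the timing lines above are derived from the raw millisecond totals and token counts. A minimal sketch recomputing them (values copied from the log; this is just an arithmetic check, not ollama code):

```python
# Recompute the derived rates from the llama_print_timings values above.
timings = {
    "prompt_eval": {"ms": 327988.24, "tokens": 1207},
    "eval": {"ms": 641.40, "tokens": 1},
}
for name, t in timings.items():
    ms_per_token = t["ms"] / t["tokens"]          # e.g. 327988.24 / 1207
    tokens_per_sec = t["tokens"] * 1000.0 / t["ms"]
    print(f"{name}: {ms_per_token:.2f} ms/token, {tokens_per_sec:.2f} tokens/s")
```

This reproduces the 271.74 ms/token (3.68 tokens/s) prompt-eval and 641.40 ms/token (1.56 tokens/s) eval figures printed in the log.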
If anything else is needed to debug the issue, I would be happy to provide it.
One idea I already have: the size of the input is magnitudes larger than the context, so it errors out somewhere silently.
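That hypothesis is easy to check against the numbers in the log: n_ctx is 2048 and the prompt eval line reports 1207 tokens. A minimal sketch of the overflow check (the helper name and the n_predict parameter are illustrative, not part of ollama):

```python
# Check whether a prompt would overflow the model's context window.
# n_ctx (2048) comes from the llama_model_load_internal lines above;
# the prompt token count (1207) from the prompt eval timing line.
def fits_in_context(n_prompt_tokens: int, n_ctx: int, n_predict: int = 0) -> bool:
    """True if the prompt plus any requested generation fits in the context."""
    return n_prompt_tokens + n_predict <= n_ctx

print(fits_in_context(1207, 2048))                   # the logged prompt alone fits
print(fits_in_context(1207, 2048, n_predict=1024))   # 2231 tokens would overflow
```

By this check the logged prompt alone fits, so any overflow would have to come from the generated tokens on top of it.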
closed_by: technovangelist (https://github.com/technovangelist, id 633681)
reactions: https://api.github.com/repos/ollama/ollama/issues/483/reactions (total_count 0, no reactions)
timeline_url: https://api.github.com/repos/ollama/ollama/issues/483/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false