Dataset columns (name: type, observed range):

- url: string (length 51–54)
- repository_url: string (1 class)
- labels_url: string (length 65–68)
- comments_url: string (length 60–63)
- events_url: string (length 58–61)
- html_url: string (length 39–44)
- id: int64 (1.78B–2.82B)
- node_id: string (length 18–19)
- number: int64 (1–8.69k)
- title: string (length 1–382)
- user: dict
- labels: list (length 0–5)
- state: string (2 classes)
- locked: bool (1 class)
- assignee: dict
- assignees: list (length 0–2)
- milestone: null
- comments: int64 (0–323)
- created_at: timestamp[s]
- updated_at: timestamp[s]
- closed_at: timestamp[s]
- author_association: string (4 classes)
- sub_issues_summary: dict
- active_lock_reason: null
- draft: bool (2 classes)
- pull_request: dict
- body: string (length 2–118k, nullable ⌀)
- closed_by: dict
- reactions: dict
- timeline_url: string (length 60–63)
- performed_via_github_app: null
- state_reason: string (4 classes)
- is_pull_request: bool (2 classes)
https://api.github.com/repos/ollama/ollama/issues/7170
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7170/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7170/comments
|
https://api.github.com/repos/ollama/ollama/issues/7170/events
|
https://github.com/ollama/ollama/issues/7170
| 2,580,524,384
|
I_kwDOJ0Z1Ps6Zz61g
| 7,170
|
[Feature request] Support external image URL for Multi Modal Models / Vision LLMs
|
{
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-10-11T06:11:29
| 2024-10-11T06:11:29
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
1. Download the image
2. Load the image
3. Run inference on the image 🎉
4. Profit 🤑

This is especially useful when running Ollama on a server, where you can't simply drag and drop an image.
_Ideally:_
```
$ ollama run minicpm-v --verbose
>>> https://farmhouseguide.com/wp-content/uploads/2021/08/group-of-llama-ee220513.jpg
Added image './group-of-llama-ee220513.jpg'
The image shows a group of lamas gathered around a water source in an outdoor, mountainous
landscape. There are six animals visible: four white llamas with thick woolly coats and two
reddish-brown guanacos or vicuñas. The setting appears to be high-altitude terrain with sparse
vegetation and rocky ground.
```

| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7170/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7170/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6337
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6337/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6337/comments
|
https://api.github.com/repos/ollama/ollama/issues/6337/events
|
https://github.com/ollama/ollama/issues/6337
| 2,462,980,634
|
I_kwDOJ0Z1Ps6Szhoa
| 6,337
|
Why is GPU utilization low on my NVIDIA T2000 when running Llama 3, with most of the computation falling back to the CPU?
|
{
"login": "pewjs",
"id": 40452701,
"node_id": "MDQ6VXNlcjQwNDUyNzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/40452701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pewjs",
"html_url": "https://github.com/pewjs",
"followers_url": "https://api.github.com/users/pewjs/followers",
"following_url": "https://api.github.com/users/pewjs/following{/other_user}",
"gists_url": "https://api.github.com/users/pewjs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pewjs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pewjs/subscriptions",
"organizations_url": "https://api.github.com/users/pewjs/orgs",
"repos_url": "https://api.github.com/users/pewjs/repos",
"events_url": "https://api.github.com/users/pewjs/events{/privacy}",
"received_events_url": "https://api.github.com/users/pewjs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-08-13T10:21:52
| 2024-09-05T22:00:00
| 2024-09-05T21:59:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I use Ollama with Llama 3 or any other model, GPU usage constantly fluctuates between high and low and the GPU is never fully occupied, while CPU usage stays high at roughly 40%. I have enabled various parameters, but to no avail.

[GIN] 2024/08/13 - 18:11:47 | 200 | 5.2344ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/08/13 - 18:11:47 | 200 | 0s | 127.0.0.1 | GET "/api/version"
time=2024-08-13T18:12:10.197+08:00 level=DEBUG source=gpu.go:362 msg="updating system memory data" before.total="63.7 GiB" before.free="42.7 GiB" before.free_swap="40.3 GiB" now.total="63.7 GiB" now.free="42.4 GiB" now.free_swap="39.9 GiB"
time=2024-08-13T18:12:10.210+08:00 level=DEBUG source=gpu.go:410 msg="updating cuda memory data" gpu=GPU-84808663-ce4d-0d38-31a7-655311eef7b0 name="Quadro T2000" overhead="275.7 MiB" before.total="4.0 GiB" before.free="3.2 GiB" now.total="4.0 GiB" now.free="3.3 GiB" now.used="490.9 MiB"
time=2024-08-13T18:12:10.246+08:00 level=DEBUG source=sched.go:219 msg="loading first model" model=D:\ollama\blobs\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
time=2024-08-13T18:12:10.246+08:00 level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[3.3 GiB]"
time=2024-08-13T18:12:10.247+08:00 level=DEBUG source=server.go:101 msg="system memory" total="63.7 GiB" free="42.4 GiB" free_swap="39.9 GiB"
time=2024-08-13T18:12:10.247+08:00 level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[3.3 GiB]"
time=2024-08-13T18:12:10.248+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=12 layers.split="" memory.available="[3.3 GiB]" memory.required.full="6.6 GiB" memory.required.partial="3.1 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[3.1 GiB]" memory.weights.total="4.9 GiB" memory.weights.repeating="4.5 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="692.0 MiB" memory.graph.partial="725.0 MiB"
time=2024-08-13T18:12:10.252+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\pewjs\AppData\Local\Programs\Ollama\ollama_runners\cpu\ollama_llama_server.exe
time=2024-08-13T18:12:10.252+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\pewjs\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx\ollama_llama_server.exe
time=2024-08-13T18:12:10.252+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\pewjs\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2\ollama_llama_server.exe
time=2024-08-13T18:12:10.252+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\pewjs\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3\ollama_llama_server.exe
time=2024-08-13T18:12:10.252+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\pewjs\AppData\Local\Programs\Ollama\ollama_runners\rocm_v6.1\ollama_llama_server.exe
time=2024-08-13T18:12:10.258+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\pewjs\AppData\Local\Programs\Ollama\ollama_runners\cpu\ollama_llama_server.exe
time=2024-08-13T18:12:10.258+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\pewjs\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx\ollama_llama_server.exe
time=2024-08-13T18:12:10.258+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\pewjs\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2\ollama_llama_server.exe
time=2024-08-13T18:12:10.258+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\pewjs\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3\ollama_llama_server.exe
time=2024-08-13T18:12:10.258+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\pewjs\AppData\Local\Programs\Ollama\ollama_runners\rocm_v6.1\ollama_llama_server.exe
time=2024-08-13T18:12:10.310+08:00 level=INFO source=server.go:393 msg="starting llama server" cmd="C:\\Users\\pewjs\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model D:\\ollama\\blobs\\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 10240 --batch-size 512 --embedding --log-disable --n-gpu-layers 12 --verbose --no-mmap --parallel 1 --port 1498"
time=2024-08-13T18:12:10.310+08:00 level=DEBUG source=server.go:410 msg=subprocess environment="[CUDA_PATH=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6 CUDA_PATH_V12_3=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.3 CUDA_PATH_V12_6=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6 PATH=C:\\Users\\pewjs\\AppData\\Local\\Programs\\Ollama\\cuda;C:\\Users\\pewjs\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3;C:\\Users\\pewjs\\AppData\\Local\\Programs\\Ollama\\ollama_runners;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\libnvvp;C:\\Program Files\\Common Files\\Oracle\\Java\\javapath;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.3\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.3\\libnvvp;D:\\anaconda3;D:\\anaconda3\\Scripts;C:\\Python27;C:\\Python27\\Scripts;C:\\Program Files\\ImageMagick-7.1.1-Q16-HDRI;C:\\Program Files (x86)\\VMware\\VMware Workstation\\bin\\;C:\\Program Files\\Java\\jdk1.8.0_281\\bin;C:\\Program Files\\Java\\jdk1.8.0_281\\jre\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files (x86)\\Windows Kits\\8.1\\Windows Performance Toolkit\\;C:\\Program Files\\OpenVPN\\bin;C:\\Program Files\\nodejs\\;C:\\Program Files\\dotnet\\;C:\\Users\\pewjs\\AppData\\Local\\Google\\Chrome\\Application;C:\\Program Files\\Git\\cmd;C:\\Program Files (x86)\\PuTTY\\;C:\\Program Files\\Bandizip\\;C:\\Users\\pewjs\\AppData\\Roaming\\FreeControl\\scrcpy-win64-v2.1.1\\;C:\\Users\\pewjs\\AppData\\Roaming\\FreeControl\\scrcpy-win64-v2.3.1\\;C:\\Program Files\\010 Editor;D:\\PHP\\;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Docker\\Docker\\resources\\bin;C:\\Program 
Files\\CMake\\bin;D:\\anaconda3\\Library\\bin;D:\\anaconda3\\Library\\mingw-w64\\bin;c:\\;C:\\MinGW\\bin;;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.3.0\\;C:\\Users\\pewjs\\AppData\\Local\\Programs\\Python\\Launcher\\;C:\\Users\\pewjs\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Program Files (x86)\\Fiddler2;d:\\Program Files (x86)\\Fiddler2;C:\\Users\\pewjs\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Program Files (x86)\\Nmap;C:\\Users\\pewjs\\AppData\\Roaming\\npm;C:\\Users\\pewjs\\AppData\\Local\\Programs\\Microsoft VS Code\\bin;C:\\Program Files\\JetBrains\\IntelliJ IDEA 2023.3.1\\bin;;C:\\Program Files\\JetBrains\\PyCharm 2023.3.2\\bin;;;C:\\Users\\pewjs\\AppData\\Local\\Programs\\Ollama;C:\\Users\\pewjs\\AppData\\Local\\Programs\\retoolkit\\network\\nmap;C:\\Users\\pewjs\\AppData\\Local\\Programs\\retoolkit\\bin;C:\\Users\\pewjs\\AppData\\Local\\Programs\\retoolkit\\android\\dex2jar;C:\\Users\\pewjs\\AppData\\Local\\Programs\\retoolkit\\debuggers\\hyperdbg;C:\\Users\\pewjs\\AppData\\Local\\Programs\\retoolkit\\dotnet\\de4dot;C:\\Users\\pewjs\\AppData\\Local\\Programs\\retoolkit\\ole\\lessmsi;C:\\Users\\pewjs\\AppData\\Local\\Programs\\retoolkit\\ole\\officemalscanner;C:\\Users\\pewjs\\AppData\\Local\\Programs\\retoolkit\\processinspection\\hollowshunter;C:\\Users\\pewjs\\AppData\\Local\\Programs\\retoolkit\\processinspection\\observer;C:\\Users\\pewjs\\AppData\\Local\\Programs\\retoolkit\\processinspection\\pesieve;C:\\Users\\pewjs\\AppData\\Local\\Programs\\retoolkit\\programming\\winpython\\python-3.11.3.amd64;C:\\Users\\pewjs\\AppData\\Local\\Programs\\retoolkit\\utilities\\winapiexec CUDA_VISIBLE_DEVICES=GPU-84808663-ce4d-0d38-31a7-655311eef7b0]"
time=2024-08-13T18:12:10.340+08:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-13T18:12:10.340+08:00 level=INFO source=server.go:593 msg="waiting for llama runner to start responding"
time=2024-08-13T18:12:10.341+08:00 level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3535 commit="1e6f6554" tid="3064" timestamp=1723543930
INFO [wmain] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="3064" timestamp=1723543930 total_threads=12
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="11" port="1498" tid="3064" timestamp=1723543930
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from D:\ollama\blobs\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 20: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 21: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-08-13T18:12:10.603+08:00 level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: Quadro T2000, compute capability 7.5, VMM: yes
llm_load_tensors: ggml ctx size = 0.27 MiB
llm_load_tensors: offloading 12 repeating layers to GPU
llm_load_tensors: offloaded 12/33 layers to GPU
llm_load_tensors: CUDA_Host buffer size = 3033.43 MiB
llm_load_tensors: CUDA0 buffer size = 1404.38 MiB
time=2024-08-13T18:12:12.705+08:00 level=DEBUG source=server.go:638 msg="model load progress 0.06"
time=2024-08-13T18:12:12.983+08:00 level=DEBUG source=server.go:638 msg="model load progress 0.30"
time=2024-08-13T18:12:13.249+08:00 level=DEBUG source=server.go:638 msg="model load progress 0.48"
time=2024-08-13T18:12:13.527+08:00 level=DEBUG source=server.go:638 msg="model load progress 0.70"
time=2024-08-13T18:12:13.780+08:00 level=DEBUG source=server.go:638 msg="model load progress 0.86"
llama_new_context_with_model: n_ctx = 10240
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
time=2024-08-13T18:12:14.060+08:00 level=DEBUG source=server.go:638 msg="model load progress 1.00"
llama_kv_cache_init: CUDA_Host KV buffer size = 800.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 480.00 MiB
llama_new_context_with_model: KV self size = 1280.00 MiB, K (f16): 640.00 MiB, V (f16): 640.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.50 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 725.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 28.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 224
time=2024-08-13T18:12:14.341+08:00 level=DEBUG source=server.go:641 msg="model load completed, waiting for server to become available" status="llm server loading model"
DEBUG [initialize] initializing slots | n_slots=1 tid="3064" timestamp=1723543936
DEBUG [initialize] new slot | n_ctx_slot=10240 slot_id=0 tid="3064" timestamp=1723543936
INFO [wmain] model loaded | tid="3064" timestamp=1723543936
DEBUG [update_slots] all slots are idle and system prompt is empty, clear the KV cache | tid="3064" timestamp=1723543936
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=0 tid="3064" timestamp=1723543936
time=2024-08-13T18:12:16.233+08:00 level=INFO source=server.go:632 msg="llama runner started in 5.89 seconds"
time=2024-08-13T18:12:16.233+08:00 level=DEBUG source=sched.go:458 msg="finished setting up runner" model=D:\ollama\blobs\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
time=2024-08-13T18:12:16.233+08:00 level=DEBUG source=routes.go:1361 msg="chat request" images=0 prompt="<|start_header_id|>user<|end_header_id|>\n\n介绍一下大模型的学习方法500字<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=1 tid="3064" timestamp=1723543936
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=2 tid="3064" timestamp=1723543936
DEBUG [update_slots] slot progression | ga_i=0 n_past=0 n_past_se=0 n_prompt_tokens_processed=19 slot_id=0 task_id=2 tid="3064" timestamp=1723543936
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=2 tid="3064" timestamp=1723543936
PS C:\WINDOWS\system32> nvidia-smi -q -d POWER,TEMPERATURE,PERFORMANCE
==============NVSMI LOG==============
Timestamp : Tue Aug 13 18:17:20 2024
Driver Version : 560.76
CUDA Version : 12.6
Attached GPUs : 1
GPU 00000000:01:00.0
Performance State : P0
Clocks Event Reasons
Idle : Active
Applications Clocks Setting : Not Active
SW Power Cap : Not Active
HW Slowdown : Not Active
HW Thermal Slowdown : Not Active
HW Power Brake Slowdown : Not Active
Sync Boost : Not Active
SW Thermal Slowdown : Not Active
Display Clock Setting : Not Active
Sparse Operation Mode : N/A
Temperature
GPU Current Temp : 65 C
GPU T.Limit Temp : N/A
GPU Shutdown Temp : 98 C
GPU Slowdown Temp : 93 C
GPU Max Operating Temp : 102 C
GPU Target Temperature : 75 C
Memory Current Temp : N/A
Memory Max Operating Temp : N/A
GPU Power Readings
Power Draw : 14.14 W
Current Power Limit : 30.00 W
Requested Power Limit : 35.00 W
Default Power Limit : 35.00 W
Min Power Limit : 1.00 W
Max Power Limit : 35.00 W
Power Samples
Duration : Not Found
Number of Samples : Not Found
Max : Not Found
Min : Not Found
Avg : Not Found
GPU Memory Power Readings
Power Draw : N/A
Module Power Readings
Power Draw : N/A
Current Power Limit : N/A
Requested Power Limit : N/A
Default Power Limit : N/A
Min Power Limit : N/A
Max Power Limit : N/A
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.35
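The log line `offload to cuda ... layers.offload=12` is the key: with only ~3.3 GiB of VRAM free on the 4 GiB T2000 and a model needing ~6.6 GiB fully offloaded, just 12 of 33 layers fit on the GPU and the remaining layers run on the CPU, which is why both show activity. A simplified sketch of that arithmetic (illustrative only, not Ollama's actual scheduler, which accounts for KV cache and graph buffers separately):

```python
def estimate_offload_layers(available_gib: float,
                            required_full_gib: float,
                            total_layers: int,
                            overhead_gib: float = 1.0) -> int:
    """Rough estimate of how many layers fit in free VRAM.

    per_layer averages the full-offload requirement over all layers;
    overhead_gib stands in for KV cache and compute buffers.
    """
    per_layer = required_full_gib / total_layers
    usable = max(available_gib - overhead_gib, 0.0)
    return min(int(usable / per_layer), total_layers)

# Values from the log above: 3.3 GiB free, 6.6 GiB required, 33 layers.
# The result lands near the logged layers.offload=12; the rest run on CPU.
print(estimate_offload_layers(3.3, 6.6, 33))
```

The practical fixes are the usual ones: a smaller quantization, a smaller context (`--ctx-size 10240` inflates the KV cache here), or a GPU with more VRAM.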
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6337/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7318
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7318/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7318/comments
|
https://api.github.com/repos/ollama/ollama/issues/7318/events
|
https://github.com/ollama/ollama/pull/7318
| 2,605,770,444
|
PR_kwDOJ0Z1Ps5_epIF
| 7,318
|
Add tensors for bitnet/triLMs, Q4_0_x_x
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-22T15:19:39
| 2024-11-26T14:32:55
| 2024-11-26T14:32:54
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7318",
"html_url": "https://github.com/ollama/ollama/pull/7318",
"diff_url": "https://github.com/ollama/ollama/pull/7318.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7318.patch",
"merged_at": null
}
|
Fixes: https://github.com/ollama/ollama/issues/2821
Fixes: https://github.com/ollama/ollama/issues/6125
```console
$ huggingface-cli download --local-dir . 1bitLLM/bitnet_b1_58-large
$ docker run --rm -it -v .:/app/models ghcr.io/ggerganov/llama.cpp:full -c --outtype tq1_0 /app/models
$ docker run --rm -it -v .:/app/models --entrypoint ./llama-quantize ghcr.io/ggerganov/llama.cpp:full /app/models/bitnet_b1_58-large-TQ1_0.gguf /app/models/bitnet_b1_58-large-TQ1_0-requant.gguf tq1_0
$ echo FROM bitnet_b1_58-large-TQ1_0-requant.gguf > Modelfile
$ ollama create bitnet_b1_58:large-TQ1_0
$ ollama run bitnet_b1_58:large-TQ1_0 the sky is blue due to
the earth’s atmosphere, and so our sunlight will get reflected in the atmosphere and reach the earth and burn it.
The Earth’s atmosphere is made of two main elements: nitrogen and oxygen. Nitrogen atoms have a positive charge and an electron (the positively charged nucleus), while oxygen atoms have a negative charge and a electron (the negatively charged
nucleus). When the two are in equilibrium, we experience a light that is mostly blue to violet and has a slight flicker when exposed to ultraviolet radiation.
We’ll be doing a lot of astronomy over here at our house over the next few weeks. As always, feel free to contact us if you have any questions or comments.
```
```console
$ huggingface-cli download --local-dir . pipilok/Llama-3.2-3B-Instruct-Q4_0_4_8-GGUF
$ echo FROM Llama-3.2-3B-Instruct-q4_0_4_8.gguf > Modelfile
$ ollama create llama-3.2:3b-instruct-q4_0_4_8
$ ollama run llama-3.2:3b-instruct-q4_0_4_8 the sky is blue due to
a phenomenon called Rayleigh scattering, which favors shorter wavelengths. This scattering effect causes the red light from sunlight to scatter in all directions, while blue and violet light are scattered more intensely, giving them a more
dispersed appearance.
In addition, the atmosphere contains aerosols like dust, water droplets, and pollutants, which can also interact with light and contribute to its scattering. These interactions can enhance or modify the Rayleigh scattering effect, but they do not
fundamentally change the underlying physics.
So, why does our sky appear blue? It's because of a combination of two main factors:
1. **Rayleigh scattering**: The shorter wavelengths of light (like blue and violet) are scattered more intensely than longer wavelengths (like red), which is why we see a predominantly blue color in the sky.
2. **Aerosols in the atmosphere**: Dust, water droplets, and pollutants can scatter light, contributing to its dispersion and affecting the apparent color of the sky.
Now that you know the science behind the blue sky, appreciate it even more on your next clear day out!
```
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7318/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7318/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8135
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8135/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8135/comments
|
https://api.github.com/repos/ollama/ollama/issues/8135/events
|
https://github.com/ollama/ollama/pull/8135
| 2,744,630,356
|
PR_kwDOJ0Z1Ps6Fejf8
| 8,135
|
Solve problems with Linux, at least Ubuntu 22.04 and 24.04 : Update linux.md
|
{
"login": "ejgutierrez74",
"id": 11474846,
"node_id": "MDQ6VXNlcjExNDc0ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11474846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ejgutierrez74",
"html_url": "https://github.com/ejgutierrez74",
"followers_url": "https://api.github.com/users/ejgutierrez74/followers",
"following_url": "https://api.github.com/users/ejgutierrez74/following{/other_user}",
"gists_url": "https://api.github.com/users/ejgutierrez74/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ejgutierrez74/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ejgutierrez74/subscriptions",
"organizations_url": "https://api.github.com/users/ejgutierrez74/orgs",
"repos_url": "https://api.github.com/users/ejgutierrez74/repos",
"events_url": "https://api.github.com/users/ejgutierrez74/events{/privacy}",
"received_events_url": "https://api.github.com/users/ejgutierrez74/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-12-17T11:06:24
| 2024-12-23T16:28:09
| 2024-12-23T14:25:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8135",
"html_url": "https://github.com/ollama/ollama/pull/8135",
"diff_url": "https://github.com/ollama/ollama/pull/8135.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8135.patch",
"merged_at": null
}
|
The install gave me some errors on Ubuntu 22.04 and 24.04 LTS, so I solved them with these little tweaks.
```
dic 17 10:06:23 MiPcLinux systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
dic 17 10:06:23 MiPcLinux systemd[1]: ollama.service: Failed with result 'exit-code'.
dic 17 10:06:26 MiPcLinux systemd[1]: ollama.service: Scheduled restart job, restart counter is at 11.
dic 17 10:06:26 MiPcLinux systemd[1]: Started ollama.service - Ollama Service.
dic 17 10:06:26 MiPcLinux ollama[16491]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
dic 17 10:06:26 MiPcLinux ollama[16491]: Error: open /usr/share/ollama/.ollama/id_ed25519: permission denied
```
So neither the ollama user, sudo, nor my own user could access /usr/share/ollama/.ollama/id_ed25519 (permission denied), so I changed the permissions.
I also added the ollama user to my own group (`$(whoami)`), but I don't think that is needed.
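A minimal sketch of the permission fix, reproduced on a scratch directory so it runs without root; the real paths and the `chown` to the `ollama` user (shown in comments) are assumptions based on the log above:

```shell
# Reproduce the fix on a throwaway directory; on a real install the same
# commands would target /usr/share/ollama/.ollama and be run with sudo.
OLLAMA_HOME="$(mktemp -d)/.ollama"
mkdir -p "$OLLAMA_HOME"
chmod 700 "$OLLAMA_HOME"                 # only the owner may enter the key directory
touch "$OLLAMA_HOME/id_ed25519"
chmod 600 "$OLLAMA_HOME/id_ed25519"      # private key: owner read/write only
# Real-install equivalent (assumption: the service runs as user "ollama"):
#   sudo chown -R ollama:ollama /usr/share/ollama/.ollama
#   sudo chmod -R u+rwX /usr/share/ollama/.ollama
stat -c '%a' "$OLLAMA_HOME/id_ed25519"
```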
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8135/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7380
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7380/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7380/comments
|
https://api.github.com/repos/ollama/ollama/issues/7380/events
|
https://github.com/ollama/ollama/issues/7380
| 2,616,198,993
|
I_kwDOJ0Z1Ps6b8AdR
| 7,380
|
Unable to run inference from web app
|
{
"login": "MatthewDlr",
"id": 57815261,
"node_id": "MDQ6VXNlcjU3ODE1MjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/57815261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MatthewDlr",
"html_url": "https://github.com/MatthewDlr",
"followers_url": "https://api.github.com/users/MatthewDlr/followers",
"following_url": "https://api.github.com/users/MatthewDlr/following{/other_user}",
"gists_url": "https://api.github.com/users/MatthewDlr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MatthewDlr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MatthewDlr/subscriptions",
"organizations_url": "https://api.github.com/users/MatthewDlr/orgs",
"repos_url": "https://api.github.com/users/MatthewDlr/repos",
"events_url": "https://api.github.com/users/MatthewDlr/events{/privacy}",
"received_events_url": "https://api.github.com/users/MatthewDlr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 10
| 2024-10-26T23:03:08
| 2024-10-28T00:42:05
| 2024-10-28T00:42:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
I recently tried to host [this project](https://github.com/jakobhoeg/nextjs-ollama-llm-ui?tab=readme-ov-file) to have a better UI to run ollama.
The app successfully gets the tags at `/api/tags`, but when I try to send a chat using `/api/chats`, the request is rejected, and I don't know why.


In the ollama logs, I can see the tags request but no trace of the chat request

FYI, I tried every configuration of `OLLAMA_HOST` and `OLLAMA_ORIGINS` and restarted the app multiple times, without success.
Is it a bug, or just something I am doing wrong?
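One thing worth double-checking alongside that is the CORS whitelist. A minimal sketch of allowing a browser origin before starting `ollama serve` (the origin value here is an assumption; also note the native chat endpoint is `/api/chat`, singular):

```shell
# Whitelist the web app's origin for the ollama server (value is illustrative).
export OLLAMA_ORIGINS="http://localhost:3000"
echo "$OLLAMA_ORIGINS"
# For the macOS menu-bar app (not `ollama serve`), set it for launchd
# and restart the app instead:
#   launchctl setenv OLLAMA_ORIGINS "http://localhost:3000"
# Then send chats to /api/chat (singular), e.g.:
#   curl http://127.0.0.1:11434/api/chat -d '{"model":"...","messages":[...]}'
```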
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.14
|
{
"login": "MatthewDlr",
"id": 57815261,
"node_id": "MDQ6VXNlcjU3ODE1MjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/57815261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MatthewDlr",
"html_url": "https://github.com/MatthewDlr",
"followers_url": "https://api.github.com/users/MatthewDlr/followers",
"following_url": "https://api.github.com/users/MatthewDlr/following{/other_user}",
"gists_url": "https://api.github.com/users/MatthewDlr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MatthewDlr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MatthewDlr/subscriptions",
"organizations_url": "https://api.github.com/users/MatthewDlr/orgs",
"repos_url": "https://api.github.com/users/MatthewDlr/repos",
"events_url": "https://api.github.com/users/MatthewDlr/events{/privacy}",
"received_events_url": "https://api.github.com/users/MatthewDlr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7380/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8668
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8668/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8668/comments
|
https://api.github.com/repos/ollama/ollama/issues/8668/events
|
https://github.com/ollama/ollama/pull/8668
| 2,818,695,620
|
PR_kwDOJ0Z1Ps6JYzfR
| 8,668
|
Hide empty terminal window
|
{
"login": "ashokgelal",
"id": 401055,
"node_id": "MDQ6VXNlcjQwMTA1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/401055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashokgelal",
"html_url": "https://github.com/ashokgelal",
"followers_url": "https://api.github.com/users/ashokgelal/followers",
"following_url": "https://api.github.com/users/ashokgelal/following{/other_user}",
"gists_url": "https://api.github.com/users/ashokgelal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashokgelal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashokgelal/subscriptions",
"organizations_url": "https://api.github.com/users/ashokgelal/orgs",
"repos_url": "https://api.github.com/users/ashokgelal/repos",
"events_url": "https://api.github.com/users/ashokgelal/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashokgelal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-29T16:31:07
| 2025-01-29T16:31:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8668",
"html_url": "https://github.com/ollama/ollama/pull/8668",
"diff_url": "https://github.com/ollama/ollama/pull/8668.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8668.patch",
"merged_at": null
}
|
This hides the blank LlamaServer window when chatting outside of the terminal (for example, with an app like Msty). It has no other side effects when Ollama is invoked the regular way.
I sent a PR for this a while ago, and it was closed under the assumption that the issue had been resolved, but it still exists (see: https://github.com/ollama/ollama/pull/4287).
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8668/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4340
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4340/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4340/comments
|
https://api.github.com/repos/ollama/ollama/issues/4340/events
|
https://github.com/ollama/ollama/issues/4340
| 2,290,642,869
|
I_kwDOJ0Z1Ps6IiG-1
| 4,340
|
how can I make ollama always run models?
|
{
"login": "zhaoyuchen1128",
"id": 167266669,
"node_id": "U_kgDOCfhJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/167266669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaoyuchen1128",
"html_url": "https://github.com/zhaoyuchen1128",
"followers_url": "https://api.github.com/users/zhaoyuchen1128/followers",
"following_url": "https://api.github.com/users/zhaoyuchen1128/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaoyuchen1128/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaoyuchen1128/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaoyuchen1128/subscriptions",
"organizations_url": "https://api.github.com/users/zhaoyuchen1128/orgs",
"repos_url": "https://api.github.com/users/zhaoyuchen1128/repos",
"events_url": "https://api.github.com/users/zhaoyuchen1128/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaoyuchen1128/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-11T03:54:46
| 2024-07-25T18:56:47
| 2024-07-25T18:56:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
If a model is not used for a while, it is unloaded, and reloading it consumes a lot of time, so the user experience is not good.
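For reference, a hedged sketch of the knobs that control this: the per-request `keep_alive` field and the server-wide `OLLAMA_KEEP_ALIVE` environment variable, where `-1` means never unload (the model name is illustrative):

```shell
# Per-request: keep this model loaded indefinitely after the call.
payload='{"model":"llama3.2","keep_alive":-1}'
echo "$payload"
# Sent with: curl http://127.0.0.1:11434/api/generate -d "$payload"
# Server-wide alternative: start the server with the variable set:
#   OLLAMA_KEEP_ALIVE=-1 ollama serve
```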
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4340/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4716
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4716/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4716/comments
|
https://api.github.com/repos/ollama/ollama/issues/4716/events
|
https://github.com/ollama/ollama/issues/4716
| 2,324,901,328
|
I_kwDOJ0Z1Ps6Kky3Q
| 4,716
|
An error occurred while creating modelfile file
|
{
"login": "wuuudong",
"id": 154340094,
"node_id": "U_kgDOCTMK_g",
"avatar_url": "https://avatars.githubusercontent.com/u/154340094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wuuudong",
"html_url": "https://github.com/wuuudong",
"followers_url": "https://api.github.com/users/wuuudong/followers",
"following_url": "https://api.github.com/users/wuuudong/following{/other_user}",
"gists_url": "https://api.github.com/users/wuuudong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wuuudong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wuuudong/subscriptions",
"organizations_url": "https://api.github.com/users/wuuudong/orgs",
"repos_url": "https://api.github.com/users/wuuudong/repos",
"events_url": "https://api.github.com/users/wuuudong/events{/privacy}",
"received_events_url": "https://api.github.com/users/wuuudong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-30T07:20:04
| 2024-05-30T16:22:32
| 2024-05-30T16:22:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I used the 4-bit quantized chatglm3-6b file to create a model from a Modelfile, with the following output:
C:\Windows\system32>ollama create example -f E:\LLM\chatglm.cpp\models\chatglm3.Modelfile
transferring model data
Error: unsupported content type: unknown
The Modelfile settings are as follows:
FROM ./chatglm3-ggml-q4.bin
TEMPLATE "[INST] {{ .Prompt }} [/INST]"
What is the reason?
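For context, a hedged sketch: `ollama create` imports GGUF files, while the `.bin` produced by chatglm.cpp is a different ggml container, which could explain the unknown content type. A Modelfile pointing at a GGUF conversion instead would look like this (the filename is hypothetical):

```shell
# Write a Modelfile that points at a GGUF file (filename is hypothetical;
# chatglm.cpp's ggml .bin format is not the same as GGUF).
cat > Modelfile <<'EOF'
FROM ./chatglm3-q4_0.gguf
TEMPLATE "[INST] {{ .Prompt }} [/INST]"
EOF
head -n 1 Modelfile
# Then: ollama create example -f Modelfile
```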
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.38
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4716/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5888
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5888/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5888/comments
|
https://api.github.com/repos/ollama/ollama/issues/5888/events
|
https://github.com/ollama/ollama/pull/5888
| 2,426,064,565
|
PR_kwDOJ0Z1Ps52Qw1t
| 5,888
|
Update gpu.md: Add RTX 3050 and RTX 3050 Ti
|
{
"login": "bean5",
"id": 2052646,
"node_id": "MDQ6VXNlcjIwNTI2NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2052646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bean5",
"html_url": "https://github.com/bean5",
"followers_url": "https://api.github.com/users/bean5/followers",
"following_url": "https://api.github.com/users/bean5/following{/other_user}",
"gists_url": "https://api.github.com/users/bean5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bean5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bean5/subscriptions",
"organizations_url": "https://api.github.com/users/bean5/orgs",
"repos_url": "https://api.github.com/users/bean5/repos",
"events_url": "https://api.github.com/users/bean5/events{/privacy}",
"received_events_url": "https://api.github.com/users/bean5/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-07-23T20:24:17
| 2024-09-05T22:08:41
| 2024-09-05T18:24:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5888",
"html_url": "https://github.com/ollama/ollama/pull/5888",
"diff_url": "https://github.com/ollama/ollama/pull/5888.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5888.patch",
"merged_at": "2024-09-05T18:24:26"
}
|
It seems strange that the laptop versions of the 3050 and 3050 Ti would be supported but not the non-notebook versions, but this is what the page (https://developer.nvidia.com/cuda-gpus) says.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5888/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4391
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4391/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4391/comments
|
https://api.github.com/repos/ollama/ollama/issues/4391/events
|
https://github.com/ollama/ollama/issues/4391
| 2,292,005,499
|
I_kwDOJ0Z1Ps6InTp7
| 4,391
|
pre-built binary doesn't work on Jetson with JP6 GA system
|
{
"login": "TadayukiOkada",
"id": 51673480,
"node_id": "MDQ6VXNlcjUxNjczNDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/51673480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TadayukiOkada",
"html_url": "https://github.com/TadayukiOkada",
"followers_url": "https://api.github.com/users/TadayukiOkada/followers",
"following_url": "https://api.github.com/users/TadayukiOkada/following{/other_user}",
"gists_url": "https://api.github.com/users/TadayukiOkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TadayukiOkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TadayukiOkada/subscriptions",
"organizations_url": "https://api.github.com/users/TadayukiOkada/orgs",
"repos_url": "https://api.github.com/users/TadayukiOkada/repos",
"events_url": "https://api.github.com/users/TadayukiOkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/TadayukiOkada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-05-13T07:07:05
| 2024-05-31T22:01:27
| 2024-05-31T22:01:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I get this error if I run the pre-built binary on Jetson Orin with JP6 GA system installed:
`source=sched.go:339 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) CUDA error: CUBLAS_STATUS_EXECUTION_FAILED\n current device: 0, in function ggml_cuda_mul_mat_batched_cublas at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:1848\n cublasGemmBatchedEx(ctx.cublas_handle(), CUBLAS_OP_T, CUBLAS_OP_N, ne01, ne11, ne10, alpha, (const void **) (ptrs_src.get() + 0*ne23), CUDA_R_16F, nb01/nb00, (const void **) (ptrs_src.get() + 1*ne23), CUDA_R_16F, nb11/nb10, beta, ( void **) (ptrs_dst.get() + 0*ne23), cu_data_type, ne01, ne23, cu_compute_type, CUBLAS_GEMM_DEFAULT_TENSOR_OP)\nGGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !\"CUDA error\""`
I built ollama from source and it runs fine on the JP6 system. Also, the pre-built binaries were working on the JP5.1.3 system.
### OS
Linux
### GPU
Nvidia
### CPU
Other
### Ollama version
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4391/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4343
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4343/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4343/comments
|
https://api.github.com/repos/ollama/ollama/issues/4343/events
|
https://github.com/ollama/ollama/issues/4343
| 2,290,663,340
|
I_kwDOJ0Z1Ps6IiL-s
| 4,343
|
windows10: v0.1.35 - The OpenAI API interface fails! But v0.1.34 is OK!
|
{
"login": "808cn",
"id": 13846472,
"node_id": "MDQ6VXNlcjEzODQ2NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/13846472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/808cn",
"html_url": "https://github.com/808cn",
"followers_url": "https://api.github.com/users/808cn/followers",
"following_url": "https://api.github.com/users/808cn/following{/other_user}",
"gists_url": "https://api.github.com/users/808cn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/808cn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/808cn/subscriptions",
"organizations_url": "https://api.github.com/users/808cn/orgs",
"repos_url": "https://api.github.com/users/808cn/repos",
"events_url": "https://api.github.com/users/808cn/events{/privacy}",
"received_events_url": "https://api.github.com/users/808cn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-05-11T04:47:04
| 2024-06-02T00:25:20
| 2024-06-02T00:25:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
v0.1.35: Windows 10, the OpenAI API fails!
-----------------------------------------------------
In version 0.1.35, the OpenAI-compatible API interface cannot be used.
Reverting to 0.1.34 is fine: with version 0.1.34, the OpenAI interface works normally.
------------------------------------------------
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.35
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4343/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6201
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6201/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6201/comments
|
https://api.github.com/repos/ollama/ollama/issues/6201/events
|
https://github.com/ollama/ollama/pull/6201
| 2,450,939,781
|
PR_kwDOJ0Z1Ps53kyTB
| 6,201
|
feat: add support for running ollama on rocm in wsl
|
{
"login": "evshiron",
"id": 8800643,
"node_id": "MDQ6VXNlcjg4MDA2NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8800643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/evshiron",
"html_url": "https://github.com/evshiron",
"followers_url": "https://api.github.com/users/evshiron/followers",
"following_url": "https://api.github.com/users/evshiron/following{/other_user}",
"gists_url": "https://api.github.com/users/evshiron/gists{/gist_id}",
"starred_url": "https://api.github.com/users/evshiron/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/evshiron/subscriptions",
"organizations_url": "https://api.github.com/users/evshiron/orgs",
"repos_url": "https://api.github.com/users/evshiron/repos",
"events_url": "https://api.github.com/users/evshiron/events{/privacy}",
"received_events_url": "https://api.github.com/users/evshiron/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 8
| 2024-08-06T13:47:10
| 2025-01-15T22:24:34
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6201",
"html_url": "https://github.com/ollama/ollama/pull/6201",
"diff_url": "https://github.com/ollama/ollama/pull/6201.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6201.patch",
"merged_at": null
}
|
Allow running Ollama on ROCm in WSL by calling HIP functions instead of querying sysfs.
`amd_hip_linux.go` was duplicated from `amd_hip_windows.go`; `windows.LoadLibrary` and `syscall.SyscallN` are replaced with CGO and `dlfcn.h`, so the binary does not depend on the HIP runtime directly.
Finally, I added an alternative routine for `RocmGPUInfo`: if the existing method cannot find any AMD GPUs, it gives the new method a try.
Please note that these code changes haven't been tested outside of WSL.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6201/reactions",
"total_count": 16,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6201/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6254
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6254/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6254/comments
|
https://api.github.com/repos/ollama/ollama/issues/6254/events
|
https://github.com/ollama/ollama/issues/6254
| 2,455,012,420
|
I_kwDOJ0Z1Ps6SVIRE
| 6,254
|
Lumina-mGPT support
|
{
"login": "Amazon90",
"id": 72290820,
"node_id": "MDQ6VXNlcjcyMjkwODIw",
"avatar_url": "https://avatars.githubusercontent.com/u/72290820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Amazon90",
"html_url": "https://github.com/Amazon90",
"followers_url": "https://api.github.com/users/Amazon90/followers",
"following_url": "https://api.github.com/users/Amazon90/following{/other_user}",
"gists_url": "https://api.github.com/users/Amazon90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Amazon90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Amazon90/subscriptions",
"organizations_url": "https://api.github.com/users/Amazon90/orgs",
"repos_url": "https://api.github.com/users/Amazon90/repos",
"events_url": "https://api.github.com/users/Amazon90/events{/privacy}",
"received_events_url": "https://api.github.com/users/Amazon90/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 3
| 2024-08-08T06:51:04
| 2024-08-08T19:36:50
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
[Lumina-mGPT](https://github.com/Alpha-VLLM/Lumina-mGPT)
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6254/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1448
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1448/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1448/comments
|
https://api.github.com/repos/ollama/ollama/issues/1448/events
|
https://github.com/ollama/ollama/issues/1448
| 2,034,042,119
|
I_kwDOJ0Z1Ps55PQUH
| 1,448
|
Pytorch model quantization, using ollama/quantize docker is not working.
|
{
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"repos_url": "https://api.github.com/users/phalexo/repos",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-12-09T20:31:24
| 2024-02-21T11:24:44
| 2024-02-20T01:21:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
(base) alexo@GrayMatters:/opt/data/data/Salesforce/codegen25-7b-mono$ docker run --rm -v .:/model -v .:/workdir ollama/quantize -q q6_K ./
sh: 0: cannot open entrypoint.sh: No such file
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1448/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/1448/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/943
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/943/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/943/comments
|
https://api.github.com/repos/ollama/ollama/issues/943/events
|
https://github.com/ollama/ollama/pull/943
| 1,966,739,516
|
PR_kwDOJ0Z1Ps5eCTge
| 943
|
doc: categorised community integrations + added ollama-webui
|
{
"login": "tjbck",
"id": 25473318,
"node_id": "MDQ6VXNlcjI1NDczMzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/25473318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tjbck",
"html_url": "https://github.com/tjbck",
"followers_url": "https://api.github.com/users/tjbck/followers",
"following_url": "https://api.github.com/users/tjbck/following{/other_user}",
"gists_url": "https://api.github.com/users/tjbck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tjbck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tjbck/subscriptions",
"organizations_url": "https://api.github.com/users/tjbck/orgs",
"repos_url": "https://api.github.com/users/tjbck/repos",
"events_url": "https://api.github.com/users/tjbck/events{/privacy}",
"received_events_url": "https://api.github.com/users/tjbck/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-28T21:04:26
| 2023-11-06T19:35:39
| 2023-11-06T19:35:39
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/943",
"html_url": "https://github.com/ollama/ollama/pull/943",
"diff_url": "https://github.com/ollama/ollama/pull/943.diff",
"patch_url": "https://github.com/ollama/ollama/pull/943.patch",
"merged_at": "2023-11-06T19:35:39"
}
|
Just found out there was a community integrations section in the README.md file.
I categorised the integrations into separate groups for better legibility and also added the [ollama-webui](https://github.com/ollama-webui/ollama-webui) project to the GUI list.
Thanks!
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/943/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7066
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7066/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7066/comments
|
https://api.github.com/repos/ollama/ollama/issues/7066/events
|
https://github.com/ollama/ollama/pull/7066
| 2,559,856,753
|
PR_kwDOJ0Z1Ps59Sd76
| 7,066
|
llama: Add CI to verify all vendored changes have patches
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-10-01T17:54:45
| 2024-10-01T18:16:15
| 2024-10-01T18:16:10
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7066",
"html_url": "https://github.com/ollama/ollama/pull/7066",
"diff_url": "https://github.com/ollama/ollama/pull/7066.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7066.patch",
"merged_at": "2024-10-01T18:16:10"
}
|
With the new vendoring model, we want to make sure we don't accidentally merge changes to the vendored code unless those changes are covered by a patch that applies cleanly on the current baseline.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7066/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1900
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1900/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1900/comments
|
https://api.github.com/repos/ollama/ollama/issues/1900/events
|
https://github.com/ollama/ollama/issues/1900
| 2,074,578,480
|
I_kwDOJ0Z1Ps57p44w
| 1,900
|
set parameter stop in repl removes other stop words
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-01-10T15:09:03
| 2024-05-10T00:57:54
| 2024-05-10T00:57:53
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If I am in the REPL and type `/set parameter stop <|system>`, all other stop words are removed. I just wanted to add one.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1900/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5764
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5764/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5764/comments
|
https://api.github.com/repos/ollama/ollama/issues/5764/events
|
https://github.com/ollama/ollama/issues/5764
| 2,415,872,249
|
I_kwDOJ0Z1Ps6P_0j5
| 5,764
|
Error: llama runner process has terminated: exit status 0xc0000409 error loading model: unable to allocate backend buffer
|
{
"login": "mohibovais79",
"id": 89134017,
"node_id": "MDQ6VXNlcjg5MTM0MDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/89134017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mohibovais79",
"html_url": "https://github.com/mohibovais79",
"followers_url": "https://api.github.com/users/mohibovais79/followers",
"following_url": "https://api.github.com/users/mohibovais79/following{/other_user}",
"gists_url": "https://api.github.com/users/mohibovais79/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mohibovais79/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mohibovais79/subscriptions",
"organizations_url": "https://api.github.com/users/mohibovais79/orgs",
"repos_url": "https://api.github.com/users/mohibovais79/repos",
"events_url": "https://api.github.com/users/mohibovais79/events{/privacy}",
"received_events_url": "https://api.github.com/users/mohibovais79/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 9
| 2024-07-18T09:40:40
| 2024-08-08T18:00:46
| 2024-08-08T18:00:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I try to run `ollama run gemma2`, this error shows up.
### OS
Windows
### GPU
_No response_
### CPU
Intel
### Ollama version
0.2.5
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5764/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7196
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7196/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7196/comments
|
https://api.github.com/repos/ollama/ollama/issues/7196/events
|
https://github.com/ollama/ollama/issues/7196
| 2,585,095,422
|
I_kwDOJ0Z1Ps6aFWz-
| 7,196
|
Model Push Successful but Ignored by Ollama Registry - Cannot Pull Model After Push
|
{
"login": "jimin0",
"id": 86674074,
"node_id": "MDQ6VXNlcjg2Njc0MDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/86674074?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimin0",
"html_url": "https://github.com/jimin0",
"followers_url": "https://api.github.com/users/jimin0/followers",
"following_url": "https://api.github.com/users/jimin0/following{/other_user}",
"gists_url": "https://api.github.com/users/jimin0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jimin0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimin0/subscriptions",
"organizations_url": "https://api.github.com/users/jimin0/orgs",
"repos_url": "https://api.github.com/users/jimin0/repos",
"events_url": "https://api.github.com/users/jimin0/events{/privacy}",
"received_events_url": "https://api.github.com/users/jimin0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-10-14T07:40:17
| 2024-12-03T20:01:45
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After successfully pushing a model to the Ollama registry using `ollama push`, the model seems to be ignored by the Ollama service. I cannot pull the model from the registry, and the service reports that "**No models have been pushed**" when accessing the registry URL.
This issue persists even though the push operation completes successfully. When attempting to pull the model afterward, the following error occurs:
```
Error: pull model manifest: file does not exist
```
### Steps to Reproduce:
1. **Pushing a Model:**
I pushed the model using the following command:
```bash
ollama push bona/bge_m3_korean:latest
```
The push operation completed successfully:
```
retrieving manifest
pushing 61bb0c982884... 100% ▕█████████████████████████████████████████████████████████████████████▏ 1.2 GB
pushing 578a2e81f706... 100% ▕█████████████████████████████████████████████████████████████████████▏ 95 B
```
2. **Listing Models Locally:**
After the push, I confirmed the model exists locally using `ollama list`:
```bash
ollama list
```
Output:
```
NAME ID SIZE MODIFIED
bona/bge_m3_korean:latest 949236422f50 1.2 GB 2 minutes ago
```
3. **Pulling the Model from the Registry:**
When trying to pull the model after pushing it:
```bash
ollama pull bona/bge_m3_korean:latest
```
I get the following error:
```
Error: pull model manifest:
file does not exist
```
4. **Checking the Model on the Ollama Registry:**
When I visit the registry URL (https://ollama.com/bona/bge-m3-korean), it shows:
```
No models have been pushed.
```
### Environment:
- **Ollama version:** 0.3.12
How can I solve this?
### OS
WSL2
### GPU
AMD, Other
### CPU
Intel
### Ollama version
0.3.12
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7196/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4838
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4838/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4838/comments
|
https://api.github.com/repos/ollama/ollama/issues/4838/events
|
https://github.com/ollama/ollama/issues/4838
| 2,336,354,937
|
I_kwDOJ0Z1Ps6LQfJ5
| 4,838
|
/api/ps shows Start of CE 'modified_at'
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-06-05T16:30:19
| 2024-06-05T18:19:53
| 2024-06-05T18:19:53
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Should not return the field
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4838/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6089
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6089/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6089/comments
|
https://api.github.com/repos/ollama/ollama/issues/6089/events
|
https://github.com/ollama/ollama/issues/6089
| 2,439,027,783
|
I_kwDOJ0Z1Ps6RYJxH
| 6,089
|
Match behavior of text-generation webui and koboldcpp by accepting requests to v1/completions that don't specify the model.
|
{
"login": "balisujohn",
"id": 20377292,
"node_id": "MDQ6VXNlcjIwMzc3Mjky",
"avatar_url": "https://avatars.githubusercontent.com/u/20377292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/balisujohn",
"html_url": "https://github.com/balisujohn",
"followers_url": "https://api.github.com/users/balisujohn/followers",
"following_url": "https://api.github.com/users/balisujohn/following{/other_user}",
"gists_url": "https://api.github.com/users/balisujohn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/balisujohn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/balisujohn/subscriptions",
"organizations_url": "https://api.github.com/users/balisujohn/orgs",
"repos_url": "https://api.github.com/users/balisujohn/repos",
"events_url": "https://api.github.com/users/balisujohn/events{/privacy}",
"received_events_url": "https://api.github.com/users/balisujohn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-07-31T03:22:59
| 2024-07-31T17:44:39
| 2024-07-31T17:38:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This works:
````
import urllib.request
import json

url = "http://localhost:11434/v1/completions"
headers = {
'Content-Type': 'application/json'
}
data = {
'model':"moondream",
'prompt': "What is the cat holding?",
'max_tokens': 20,
'temperature': 1,
'top_p': 0.9,
'seed': 10
}
# Convert data to JSON format
json_data = json.dumps(data).encode('utf-8')
# Create a request object with the URL, data, and headers
request = urllib.request.Request(url, data=json_data, headers=headers, method='POST')
````
While this results in a 400 error
````
import urllib.request
import urllib.parse
import json
url = "http://localhost:11434/v1/completions"
headers = {
'Content-Type': 'application/json'
}
data = {
'prompt': "What is the cat holding?",
'max_tokens': 20,
'temperature': 1,
'top_p': 0.9,
'seed': 10
}
# Convert data to JSON format
json_data = json.dumps(data).encode('utf-8')
# Create a request object with the URL, data, and headers
request = urllib.request.Request(url, data=json_data, headers=headers, method='POST')
# Send the request and read the response
with urllib.request.urlopen(request) as response:
response_data = response.read()
# If needed, decode the response data
response = json.loads(response_data.decode('utf-8'))
new_text = response["choices"][0]["text"]
print(new_text)
````
If possible, I would like the second case to work as well, defaulting to one of the models Ollama is currently running (it doesn't matter which).
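Until the endpoint accepts model-less requests, a client-side workaround is possible: ask `/api/tags` which models are installed and inject one into the payload. A minimal sketch, assuming Ollama is running on the default port (the helper names here are illustrative, not part of any API):

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434"

def pick_default_model():
    # /api/tags lists installed models; take the first one as a
    # client-side default, since the server does not pick one itself.
    with urllib.request.urlopen(f"{OLLAMA}/api/tags") as resp:
        models = json.loads(resp.read())["models"]
    if not models:
        raise RuntimeError("no models installed")
    return models[0]["name"]

def build_completion_payload(model, prompt, **options):
    # Merge the chosen model name into an otherwise model-less body.
    payload = {"model": model, "prompt": prompt}
    payload.update(options)
    return payload

if __name__ == "__main__":
    model = pick_default_model()
    data = build_completion_payload(model, "What is the cat holding?",
                                    max_tokens=20, temperature=1,
                                    top_p=0.9, seed=10)
    request = urllib.request.Request(
        f"{OLLAMA}/v1/completions",
        data=json.dumps(data).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        print(json.loads(response.read())["choices"][0]["text"])
```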
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6089/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1103
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1103/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1103/comments
|
https://api.github.com/repos/ollama/ollama/issues/1103/events
|
https://github.com/ollama/ollama/issues/1103
| 1,989,574,177
|
I_kwDOJ0Z1Ps52ln4h
| 1,103
|
Custom model repeats context in the response
|
{
"login": "sethmbhele",
"id": 4163455,
"node_id": "MDQ6VXNlcjQxNjM0NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4163455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sethmbhele",
"html_url": "https://github.com/sethmbhele",
"followers_url": "https://api.github.com/users/sethmbhele/followers",
"following_url": "https://api.github.com/users/sethmbhele/following{/other_user}",
"gists_url": "https://api.github.com/users/sethmbhele/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sethmbhele/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sethmbhele/subscriptions",
"organizations_url": "https://api.github.com/users/sethmbhele/orgs",
"repos_url": "https://api.github.com/users/sethmbhele/repos",
"events_url": "https://api.github.com/users/sethmbhele/events{/privacy}",
"received_events_url": "https://api.github.com/users/sethmbhele/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-11-12T21:00:35
| 2023-11-19T16:48:28
| 2023-11-19T16:48:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello Friends
Firstly, thank you so much for this amazing project. I have been playing around with it and having quite a blast learning the ins and outs of Ollama. I would appreciate help with a challenge I am currently facing:
I created a Modelfile, set the temperature and a system message, then created and ran the custom model. Everything works great and the new model responds according to the system message in the Modelfile. The problem is that by the second or third turn of a multi-turn chat, I get the response with the entire system message appended at the end. Any ideas on how to fix this behaviour?
PS: Adding an instruction in the system message to NOT repeat the system message in the response did not work :(
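One thing worth trying: drive the multi-turn conversation through the REST API with an explicit message list, so the system prompt is sent exactly once as a proper `system` role message instead of being re-templated into each turn. A minimal sketch, assuming an Ollama build that exposes the `/api/chat` endpoint (`build_chat_request` is an illustrative helper, not part of the API):

```python
import json
import urllib.request

def build_chat_request(model, system, history, user_msg):
    # The system prompt goes in once, as its own message; earlier turns
    # are replayed verbatim, so the client controls exactly what the
    # model sees each round.
    messages = [{"role": "system", "content": system}]
    messages += history
    messages.append({"role": "user", "content": user_msg})
    return {"model": model, "messages": messages, "stream": False}

if __name__ == "__main__":
    body = build_chat_request("mycustommodel",
                              "You are a terse assistant.",
                              [], "Hello!")
    request = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print(json.loads(response.read())["message"]["content"])
```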
|
{
"login": "sethmbhele",
"id": 4163455,
"node_id": "MDQ6VXNlcjQxNjM0NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4163455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sethmbhele",
"html_url": "https://github.com/sethmbhele",
"followers_url": "https://api.github.com/users/sethmbhele/followers",
"following_url": "https://api.github.com/users/sethmbhele/following{/other_user}",
"gists_url": "https://api.github.com/users/sethmbhele/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sethmbhele/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sethmbhele/subscriptions",
"organizations_url": "https://api.github.com/users/sethmbhele/orgs",
"repos_url": "https://api.github.com/users/sethmbhele/repos",
"events_url": "https://api.github.com/users/sethmbhele/events{/privacy}",
"received_events_url": "https://api.github.com/users/sethmbhele/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1103/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5996
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5996/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5996/comments
|
https://api.github.com/repos/ollama/ollama/issues/5996/events
|
https://github.com/ollama/ollama/pull/5996
| 2,432,964,797
|
PR_kwDOJ0Z1Ps52npDv
| 5,996
|
Add charla project to Terminal section
|
{
"login": "yaph",
"id": 60051,
"node_id": "MDQ6VXNlcjYwMDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/60051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaph",
"html_url": "https://github.com/yaph",
"followers_url": "https://api.github.com/users/yaph/followers",
"following_url": "https://api.github.com/users/yaph/following{/other_user}",
"gists_url": "https://api.github.com/users/yaph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaph/subscriptions",
"organizations_url": "https://api.github.com/users/yaph/orgs",
"repos_url": "https://api.github.com/users/yaph/repos",
"events_url": "https://api.github.com/users/yaph/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaph/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-07-26T20:53:55
| 2024-09-09T21:07:13
| 2024-09-09T21:06:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5996",
"html_url": "https://github.com/ollama/ollama/pull/5996",
"diff_url": "https://github.com/ollama/ollama/pull/5996.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5996.patch",
"merged_at": null
}
|
Charla is a simple terminal-based chat application that works with local language models. I'd appreciate it if you would consider it as an example project.
|
{
"login": "yaph",
"id": 60051,
"node_id": "MDQ6VXNlcjYwMDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/60051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaph",
"html_url": "https://github.com/yaph",
"followers_url": "https://api.github.com/users/yaph/followers",
"following_url": "https://api.github.com/users/yaph/following{/other_user}",
"gists_url": "https://api.github.com/users/yaph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaph/subscriptions",
"organizations_url": "https://api.github.com/users/yaph/orgs",
"repos_url": "https://api.github.com/users/yaph/repos",
"events_url": "https://api.github.com/users/yaph/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaph/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5996/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1794
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1794/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1794/comments
|
https://api.github.com/repos/ollama/ollama/issues/1794/events
|
https://github.com/ollama/ollama/issues/1794
| 2,066,598,674
|
I_kwDOJ0Z1Ps57LcsS
| 1,794
|
"This model requires you to add a jpeg, png, or svg image" error on native windows build
|
{
"login": "prabirshrestha",
"id": 287744,
"node_id": "MDQ6VXNlcjI4Nzc0NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/287744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prabirshrestha",
"html_url": "https://github.com/prabirshrestha",
"followers_url": "https://api.github.com/users/prabirshrestha/followers",
"following_url": "https://api.github.com/users/prabirshrestha/following{/other_user}",
"gists_url": "https://api.github.com/users/prabirshrestha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prabirshrestha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prabirshrestha/subscriptions",
"organizations_url": "https://api.github.com/users/prabirshrestha/orgs",
"repos_url": "https://api.github.com/users/prabirshrestha/repos",
"events_url": "https://api.github.com/users/prabirshrestha/events{/privacy}",
"received_events_url": "https://api.github.com/users/prabirshrestha/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-01-05T01:46:18
| 2024-01-07T17:05:47
| 2024-01-07T17:05:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have compiled ollama as a native Windows binary and have been able to load and run models.
When running the llava model, I get an error.
```bat
ollama run llava
```
```
>>> describe this image c:\download.jpeg
describe this image D:\code\download.jpeg
This model requires you to add a jpeg, png, or
svg image.
```
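As a workaround while the CLI's path handling is broken on Windows, the image can be sent through the REST API as a base64 string, which sidesteps path parsing entirely. A minimal sketch, assuming the documented `images` field of `/api/generate` (the helper name is illustrative):

```python
import base64
import json
import urllib.request

def build_generate_request(model, prompt, image_path):
    # The REST API takes images as base64-encoded strings, so no
    # filesystem path ever reaches the server's prompt parsing.
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return {"model": model, "prompt": prompt,
            "images": [encoded], "stream": False}

if __name__ == "__main__":
    body = build_generate_request("llava", "describe this image",
                                  r"D:\code\download.jpeg")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print(json.loads(response.read())["response"])
```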
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1794/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2211
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2211/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2211/comments
|
https://api.github.com/repos/ollama/ollama/issues/2211/events
|
https://github.com/ollama/ollama/issues/2211
| 2,102,653,728
|
I_kwDOJ0Z1Ps59U_Mg
| 2,211
|
Mistral v0.2 hangs after repeatedly writing same token
|
{
"login": "arch-user-france1",
"id": 72965843,
"node_id": "MDQ6VXNlcjcyOTY1ODQz",
"avatar_url": "https://avatars.githubusercontent.com/u/72965843?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arch-user-france1",
"html_url": "https://github.com/arch-user-france1",
"followers_url": "https://api.github.com/users/arch-user-france1/followers",
"following_url": "https://api.github.com/users/arch-user-france1/following{/other_user}",
"gists_url": "https://api.github.com/users/arch-user-france1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arch-user-france1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arch-user-france1/subscriptions",
"organizations_url": "https://api.github.com/users/arch-user-france1/orgs",
"repos_url": "https://api.github.com/users/arch-user-france1/repos",
"events_url": "https://api.github.com/users/arch-user-france1/events{/privacy}",
"received_events_url": "https://api.github.com/users/arch-user-france1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-01-26T18:02:03
| 2024-03-12T22:50:45
| 2024-03-12T22:50:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
<img width="1446" alt="grafik" src="https://github.com/ollama/ollama/assets/72965843/daff8519-4262-46f1-b52d-d11b246355b4">
```bash
➜ ~ ollama ls
NAME ID SIZE MODIFIED
mistral:v0.2 61e88e884507 4.1 GB 2 days ago
```
It crashed; the ollama runner was using 300 MB of RAM with no active CPU or GPU usage.
Running on MPS. The conversation cannot be continued anymore, and ollama's text generation API hangs:
```bash
Looking forward to our interaction! Let me know if you have any specific requests or questions. Cheers! 🤘🏼💻💪🏼🌠🚀✨🔬🧩👨💻👩💻🐶😸🌹🏛️🌅🌄🌈🌊🏖️🚴🏼♂️🚶🏼♀️🚗🚧🚢🚤🚣🏻♂️🚣🏻♀️🛳️🚤🚢🛵🏋️🎾🤽🏻♂️🤸🏾♀️🧞🏽🧜🏼♂️🧜🏼♀️🦇🐍🕷️🕸️🌼🌱🌺🍎🍌🍊🍌🤘🏼💻💪🏼🌠🚀✨🔬🧩👨💻👩💻🐶😸🌹🏛️🌅🌄🌈🌊🏖️🚴🏼♂️🚶🏼♀️🚗🚧🚢🚤🚣🏻♂️🚣🏻♀️🛳️🚤🚢🛵🏋️🎾🤽🏻♂️🤸🏾♀️🧞🏽🧜🏼♂️🧜🏼♀️🦇🐍🕷️🕸️🌼🌱🌺🍎🍌🍊🍌🍇🥝🤴🏻🤴🏼👸🏽👸🏼🦄🐲🐉🐉🦋🐘🐘🧁🧁🎂🍰🥧🧁🧀🧀🌭🌮🍔🍟🍕🍖🍲🍱🥙🏴🇩🇪🇫🇷🇯🇵🇹🇼🇨🇭🇦🇹🇱🇻🇵🇸🇪🇸🇮🇹🇬🇧🇯🇴🇩🇿🇨🇼🇪🇪🇱🇾🇲🇪🇷🇸🇭🇷🇹🇹🇮🇸🇬🇧🇵🇺🇿🇦🇫🇷🇨🇩🇳🇱🇭🇷🇭🇾🇳🇴🇩🇰🇪🇸🇮🇸🇫🇷🇩🇰🇧🇪🇬🇧🇯🇵🇦🇺🇨
🇭🇩🇿🇱🇹🇲🇦🇷🇸🇫🇷🇧🇬🇮🇳🇪🇸🇭🇷🇲🇨🇳🇱🇯🇵🇩🇿🇩🇰🇪🇸🇫🇷🇦🇺🇹🇼🇧🇬🇭🇳🇱🇮🇶🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🥝🤴🏻🤴🏼👸🏽👸🏼🦄🐲🐉🐉🦋🐘🐘🧁🧁🎂🍰🥧🧁🧀🧀🌭🌮🍔🍟🍕🍖🍲🍱🥙🏴🇩🇪🇫🇷🇯🇵🇹🇼🇨🇭🇦🇹🇱🇻🇵🇸🇪🇸🇮🇹🇬🇧🇯🇴🇩🇿🇨🇼🇪🇪🇱🇾🇲🇪🇷🇸🇭🇷🇹🇹🇮🇸🇬🇧🇵🇺🇿🇦🇫🇷🇨🇩🇳🇱🇭🇷🇭🇾🇳🇴🇩🇰🇪🇸🇮🇸🇫🇷🇩🇰🇧🇪🇬🇧🇯🇵🇦🇺🇨🇭🇩🇿🇱🇹🇲🇦🇷🇸🇫🇷🇧🇬🇮🇳🇪🇸🇭🇷🇲🇨🇳🇱🇯🇵🇩🇿🇩🇰🇪🇸🇫🇷🇦🇺🇹🇼🇧🇬🇭🇳🇱🇮🇶🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹🇹^C
>>> General tech news and trends
⠹
```
New conversations cannot be started. The bug may only be reproducible with the correct seed, as Mistral does not usually have any temperature problems.
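When reproducing or mitigating loops like this over the API, options such as `repeat_penalty`, `num_predict`, and a fixed `seed` can help: the penalty discourages re-emitting recent tokens, the token cap bounds a runaway generation, and the seed makes the failure reproducible. A minimal sketch against `/api/generate` (the helper name and the specific values are illustrative):

```python
import json
import urllib.request

def build_request(model, prompt):
    # repeat_penalty discourages repeating recent tokens, num_predict
    # caps how long a runaway generation can get, and a fixed seed
    # makes the hang reproducible across runs.
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"repeat_penalty": 1.2, "num_predict": 256, "seed": 10},
    }

if __name__ == "__main__":
    body = build_request("mistral:v0.2", "General tech news and trends")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print(json.loads(response.read())["response"])
```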
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2211/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5290
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5290/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5290/comments
|
https://api.github.com/repos/ollama/ollama/issues/5290/events
|
https://github.com/ollama/ollama/issues/5290
| 2,374,312,047
|
I_kwDOJ0Z1Ps6NhSBv
| 5,290
|
ollama-go bindings
|
{
"login": "k0marov",
"id": 95040709,
"node_id": "U_kgDOBao0xQ",
"avatar_url": "https://avatars.githubusercontent.com/u/95040709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/k0marov",
"html_url": "https://github.com/k0marov",
"followers_url": "https://api.github.com/users/k0marov/followers",
"following_url": "https://api.github.com/users/k0marov/following{/other_user}",
"gists_url": "https://api.github.com/users/k0marov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/k0marov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/k0marov/subscriptions",
"organizations_url": "https://api.github.com/users/k0marov/orgs",
"repos_url": "https://api.github.com/users/k0marov/repos",
"events_url": "https://api.github.com/users/k0marov/events{/privacy}",
"received_events_url": "https://api.github.com/users/k0marov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-06-26T05:19:40
| 2024-07-08T23:19:18
| 2024-07-08T23:19:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, I'm interested in having a native Go client library for the Ollama REST API, like the Python and JS ones.
I can start myself, but I want to ask: is someone already working on it?
If it's not taken, I'll be glad to make this contribution.
Thanks for this awesome system!
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5290/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1132
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1132/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1132/comments
|
https://api.github.com/repos/ollama/ollama/issues/1132/events
|
https://github.com/ollama/ollama/pull/1132
| 1,993,715,970
|
PR_kwDOJ0Z1Ps5fdnos
| 1,132
|
replace go-humanize with format.HumanBytes
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-14T22:58:03
| 2023-11-15T17:46:23
| 2023-11-15T17:46:22
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1132",
"html_url": "https://github.com/ollama/ollama/pull/1132",
"diff_url": "https://github.com/ollama/ollama/pull/1132.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1132.patch",
"merged_at": "2023-11-15T17:46:22"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1132/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2833
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2833/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2833/comments
|
https://api.github.com/repos/ollama/ollama/issues/2833/events
|
https://github.com/ollama/ollama/issues/2833
| 2,161,303,468
|
I_kwDOJ0Z1Ps6A0t-s
| 2,833
|
Running ollama on Hugging Face Spaces
|
{
"login": "jbdatascience",
"id": 33154192,
"node_id": "MDQ6VXNlcjMzMTU0MTky",
"avatar_url": "https://avatars.githubusercontent.com/u/33154192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbdatascience",
"html_url": "https://github.com/jbdatascience",
"followers_url": "https://api.github.com/users/jbdatascience/followers",
"following_url": "https://api.github.com/users/jbdatascience/following{/other_user}",
"gists_url": "https://api.github.com/users/jbdatascience/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbdatascience/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbdatascience/subscriptions",
"organizations_url": "https://api.github.com/users/jbdatascience/orgs",
"repos_url": "https://api.github.com/users/jbdatascience/repos",
"events_url": "https://api.github.com/users/jbdatascience/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbdatascience/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2024-02-29T13:44:33
| 2024-06-24T16:05:20
| 2024-05-17T22:59:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I want to run Ollama on Hugging Face Spaces, because I run a Streamlit app there that must make use of an LLM and an embedding model served by Ollama. How can I do that?
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2833/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/3274
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3274/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3274/comments
|
https://api.github.com/repos/ollama/ollama/issues/3274/events
|
https://github.com/ollama/ollama/pull/3274
| 2,198,201,168
|
PR_kwDOJ0Z1Ps5qQgQh
| 3,274
|
Community Integration: tlm - cli copilot with ollama
|
{
"login": "yusufcanb",
"id": 9295668,
"node_id": "MDQ6VXNlcjkyOTU2Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yusufcanb",
"html_url": "https://github.com/yusufcanb",
"followers_url": "https://api.github.com/users/yusufcanb/followers",
"following_url": "https://api.github.com/users/yusufcanb/following{/other_user}",
"gists_url": "https://api.github.com/users/yusufcanb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yusufcanb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yusufcanb/subscriptions",
"organizations_url": "https://api.github.com/users/yusufcanb/orgs",
"repos_url": "https://api.github.com/users/yusufcanb/repos",
"events_url": "https://api.github.com/users/yusufcanb/events{/privacy}",
"received_events_url": "https://api.github.com/users/yusufcanb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-20T17:58:25
| 2024-03-25T18:53:27
| 2024-03-25T18:53:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3274",
"html_url": "https://github.com/ollama/ollama/pull/3274",
"diff_url": "https://github.com/ollama/ollama/pull/3274.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3274.patch",
"merged_at": "2024-03-25T18:53:26"
}
|
I was advised by the Ollama staff during KubeCon 2024 Paris to create a PR to include [tlm](https://github.com/yusufcanb/tlm) in [README.md](https://github.com/ollama/ollama/blob/main/README.md). Thanks to everyone who expressed their excitement for what I've created. ❤
So, here is the PR to include [tlm](https://github.com/yusufcanb/tlm) in the Terminal section. Thanks again for the advice.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3274/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2097
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2097/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2097/comments
|
https://api.github.com/repos/ollama/ollama/issues/2097/events
|
https://github.com/ollama/ollama/issues/2097
| 2,090,902,481
|
I_kwDOJ0Z1Ps58oKPR
| 2,097
|
Overwriting an existing model from a modelfile leaves old blob not deleted
|
{
"login": "hyjwei",
"id": 76876891,
"node_id": "MDQ6VXNlcjc2ODc2ODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/76876891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hyjwei",
"html_url": "https://github.com/hyjwei",
"followers_url": "https://api.github.com/users/hyjwei/followers",
"following_url": "https://api.github.com/users/hyjwei/following{/other_user}",
"gists_url": "https://api.github.com/users/hyjwei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hyjwei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hyjwei/subscriptions",
"organizations_url": "https://api.github.com/users/hyjwei/orgs",
"repos_url": "https://api.github.com/users/hyjwei/repos",
"events_url": "https://api.github.com/users/hyjwei/events{/privacy}",
"received_events_url": "https://api.github.com/users/hyjwei/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-01-19T16:40:03
| 2024-01-22T17:37:50
| 2024-01-22T17:37:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Problem ###
When I import a GGUF model into ollama, I create a modelfile with a "FROM" line and then run `ollama create`, and a blob is created in the model directory.
Then I decide to import another GGUF model (different quant parameters), so I modify the "FROM" line and run `ollama create` again. A new blob is created, but the old blob is still in the model directory.
If I run `ollama rm` to remove the model, only the second blob is deleted; the old one is still there. I don't know how to properly delete that old blob using the ollama command line, so I have to delete the file manually.
### Expected behavior ###
When I overwrite an existing model using the `ollama create` command, the old blobs should be removed.
Or, there should be an option, like `fsck`, to purge the obsolete blobs from the model directory.
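A hypothetical sketch of what such a purge pass could look like. The directory layout, manifest field names, and digest format below are illustrative assumptions, not Ollama's actual on-disk schema:

```python
import json
import pathlib

def find_orphan_blobs(models_dir):
    """Return blob files not referenced by any manifest.

    Assumes an illustrative layout: manifests/ holds JSON files whose
    "layers" entries carry a "digest" like "sha256:<hex>", and blobs/
    stores files named "sha256-<hex>". This mirrors, but is not
    guaranteed to match, the real Ollama schema.
    """
    root = pathlib.Path(models_dir)
    referenced = set()
    for manifest in (root / "manifests").rglob("*"):
        if manifest.is_file():
            data = json.loads(manifest.read_text())
            for layer in data.get("layers", []):
                # Digests are "sha256:<hex>"; blob filenames use "-".
                referenced.add(layer["digest"].replace(":", "-"))
    return [b for b in sorted((root / "blobs").iterdir())
            if b.name not in referenced]
```

A real `fsck`-style command would then delete (or report) the returned files.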
Regards,
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2097/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2097/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6047
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6047/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6047/comments
|
https://api.github.com/repos/ollama/ollama/issues/6047/events
|
https://github.com/ollama/ollama/issues/6047
| 2,435,325,229
|
I_kwDOJ0Z1Ps6RKB0t
| 6,047
|
Ollama
|
{
"login": "wAyNecheRui",
"id": 176916787,
"node_id": "U_kgDOCouJMw",
"avatar_url": "https://avatars.githubusercontent.com/u/176916787?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wAyNecheRui",
"html_url": "https://github.com/wAyNecheRui",
"followers_url": "https://api.github.com/users/wAyNecheRui/followers",
"following_url": "https://api.github.com/users/wAyNecheRui/following{/other_user}",
"gists_url": "https://api.github.com/users/wAyNecheRui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wAyNecheRui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wAyNecheRui/subscriptions",
"organizations_url": "https://api.github.com/users/wAyNecheRui/orgs",
"repos_url": "https://api.github.com/users/wAyNecheRui/repos",
"events_url": "https://api.github.com/users/wAyNecheRui/events{/privacy}",
"received_events_url": "https://api.github.com/users/wAyNecheRui/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-29T12:46:45
| 2024-07-30T16:31:45
| 2024-07-30T16:31:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
It is very slow while loading in the command prompt.
### OS
Windows
### GPU
_No response_
### CPU
Other
### Ollama version
llama2
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6047/reactions",
"total_count": 1,
"+1": 0,
"-1": 1,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6047/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3912
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3912/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3912/comments
|
https://api.github.com/repos/ollama/ollama/issues/3912/events
|
https://github.com/ollama/ollama/issues/3912
| 2,263,842,810
|
I_kwDOJ0Z1Ps6G73_6
| 3,912
|
Server hang after ~400 long context requests mixtral or llama3 ollama 0.1.32
|
{
"login": "kungfu-eric",
"id": 87145506,
"node_id": "MDQ6VXNlcjg3MTQ1NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/87145506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kungfu-eric",
"html_url": "https://github.com/kungfu-eric",
"followers_url": "https://api.github.com/users/kungfu-eric/followers",
"following_url": "https://api.github.com/users/kungfu-eric/following{/other_user}",
"gists_url": "https://api.github.com/users/kungfu-eric/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kungfu-eric/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kungfu-eric/subscriptions",
"organizations_url": "https://api.github.com/users/kungfu-eric/orgs",
"repos_url": "https://api.github.com/users/kungfu-eric/repos",
"events_url": "https://api.github.com/users/kungfu-eric/events{/privacy}",
"received_events_url": "https://api.github.com/users/kungfu-eric/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-04-25T15:04:05
| 2024-05-09T22:32:53
| 2024-05-09T22:32:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hangs after about 400 long-context requests on mixtral, and the same with llama3.
```
ollama --version
ollama version is 0.1.32
```
This is on an AMD CPU, 2x NVIDIA A6000s, Ubuntu 18.04 in a Docker container. The client uses the python ollama package. We work around it by restarting the server manually and using asyncio.wait_for in the client.
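The client-side workaround can be sketched roughly like this (`generate_with_timeout` and the 120-second default are illustrative names, not part of the ollama package):

```python
import asyncio

async def generate_with_timeout(make_request, timeout_s=120.0):
    """Run one request coroutine, giving up after timeout_s seconds.

    make_request is a zero-argument callable returning a fresh coroutine
    (e.g. lambda: client.generate(...)). Returns None on timeout so the
    caller can restart the server and retry.
    """
    try:
        return await asyncio.wait_for(make_request(), timeout=timeout_s)
    except asyncio.TimeoutError:
        return None
```

This does not fix the server-side hang, but it keeps the client from blocking forever on one of them.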
> Please give 0.1.32 a try and let us know if you're still seeing unrecoverable hangs.
The hang continues to output this on the ollama server but no response is given to the client:
```
{"function":"update_slots","level":"INFO","line":1601,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":1,"n_left":2046,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":105393,"tid":"140517846056960","timestamp":1714056803}
{"function":"update_slots","level":"INFO","line":1601,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":1,"n_left":2046,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":105393,"tid":"140517846056960","timestamp":1714056823}
{"function":"update_slots","level":"INFO","line":1601,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":1,"n_left":2046,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":105393,"tid":"140517846056960","timestamp":1714056843}
```
Maybe related to https://github.com/ollama/ollama/issues/1863
### OS
Linux, Docker
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.32
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3912/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3912/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1274
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1274/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1274/comments
|
https://api.github.com/repos/ollama/ollama/issues/1274/events
|
https://github.com/ollama/ollama/issues/1274
| 2,010,492,898
|
I_kwDOJ0Z1Ps531a_i
| 1,274
|
"no such file or directory" when creating model during the "creating adapter layer" step
|
{
"login": "meow-d",
"id": 51119160,
"node_id": "MDQ6VXNlcjUxMTE5MTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/51119160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meow-d",
"html_url": "https://github.com/meow-d",
"followers_url": "https://api.github.com/users/meow-d/followers",
"following_url": "https://api.github.com/users/meow-d/following{/other_user}",
"gists_url": "https://api.github.com/users/meow-d/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meow-d/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meow-d/subscriptions",
"organizations_url": "https://api.github.com/users/meow-d/orgs",
"repos_url": "https://api.github.com/users/meow-d/repos",
"events_url": "https://api.github.com/users/meow-d/events{/privacy}",
"received_events_url": "https://api.github.com/users/meow-d/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 9
| 2023-11-25T06:09:47
| 2024-01-18T23:50:30
| 2024-01-18T23:50:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When I run `ollama create storywriter`, I get:
```
transferring model data
reading model metadata
creating template layer
creating system layer
creating adapter layer
Error: open /@sha256:439bdfbd08b0143c5f5f97154d76676a5348a5a00a2fac38fdc8d1c4498d67d3: no such file or directory
```
By the way, I'm running on Fedora 39.
my Modelfile, just in case:
```
FROM llama2-uncensored:latest
TEMPLATE """{{ .System }}
### HUMAN:
{{ .Prompt }}
### RESPONSE:
"""
PARAMETER stop "### HUMAN:"
PARAMETER stop "### RESPONSE:"
SYSTEM """
"""
ADAPTER ./adapter_model.bin
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1274/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3884
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3884/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3884/comments
|
https://api.github.com/repos/ollama/ollama/issues/3884/events
|
https://github.com/ollama/ollama/pull/3884
| 2,261,741,869
|
PR_kwDOJ0Z1Ps5toHaF
| 3,884
|
docs: add Hollama to Web & Desktop integrations
|
{
"login": "fmaclen",
"id": 1434675,
"node_id": "MDQ6VXNlcjE0MzQ2NzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1434675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fmaclen",
"html_url": "https://github.com/fmaclen",
"followers_url": "https://api.github.com/users/fmaclen/followers",
"following_url": "https://api.github.com/users/fmaclen/following{/other_user}",
"gists_url": "https://api.github.com/users/fmaclen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fmaclen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fmaclen/subscriptions",
"organizations_url": "https://api.github.com/users/fmaclen/orgs",
"repos_url": "https://api.github.com/users/fmaclen/repos",
"events_url": "https://api.github.com/users/fmaclen/events{/privacy}",
"received_events_url": "https://api.github.com/users/fmaclen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-24T16:46:34
| 2024-05-07T20:17:36
| 2024-05-07T20:17:36
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3884",
"html_url": "https://github.com/ollama/ollama/pull/3884",
"diff_url": "https://github.com/ollama/ollama/pull/3884.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3884.patch",
"merged_at": "2024-05-07T20:17:36"
}
|
**Hollama** is a minimal web-UI for talking to Ollama servers.
https://hollama.fernando.is
**Repository:**
https://github.com/fmaclen/hollama
**Current features:**
- Large prompt fields
- Streams completions
- Copy completions as raw text
- Markdown parsing w/syntax highlighting
- Saves sessions/context in your browser's localStorage
- With [more to come](https://github.com/fmaclen/hollama/issues)...
**Screenshots:**
> 
> 
> 
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3884/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5451
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5451/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5451/comments
|
https://api.github.com/repos/ollama/ollama/issues/5451/events
|
https://github.com/ollama/ollama/issues/5451
| 2,387,413,448
|
I_kwDOJ0Z1Ps6OTQnI
| 5,451
|
Speech-To-Text Transcription
|
{
"login": "HerroHK",
"id": 170845944,
"node_id": "U_kgDOCi7m-A",
"avatar_url": "https://avatars.githubusercontent.com/u/170845944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HerroHK",
"html_url": "https://github.com/HerroHK",
"followers_url": "https://api.github.com/users/HerroHK/followers",
"following_url": "https://api.github.com/users/HerroHK/following{/other_user}",
"gists_url": "https://api.github.com/users/HerroHK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HerroHK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HerroHK/subscriptions",
"organizations_url": "https://api.github.com/users/HerroHK/orgs",
"repos_url": "https://api.github.com/users/HerroHK/repos",
"events_url": "https://api.github.com/users/HerroHK/events{/privacy}",
"received_events_url": "https://api.github.com/users/HerroHK/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-07-03T00:56:45
| 2024-07-03T16:33:20
| 2024-07-03T16:33:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Issue: our company has audio recordings that are confidential in nature. We have set up a Linux server (Ubuntu) running Ollama with both Open-WebUI and AnythingLLM as the interface. However, it seems neither is able to transcribe long (up to 8 hours) audio recordings, and we only get back snippets. It is also unclear where the boundaries are in terms of time, as it seems some parts do get transcribed.
It would make a great addition to Ollama if we could make use of Whisper or other models locally to do this.
I am pretty sure it is a very common use case of AI, with plenty of commercially available options. But the point is that we can't easily use commercial services while fulfilling our needs and contracts at the same time.
Thanks for considering this.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5451/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5451/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1391
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1391/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1391/comments
|
https://api.github.com/repos/ollama/ollama/issues/1391/events
|
https://github.com/ollama/ollama/issues/1391
| 2,026,828,139
|
I_kwDOJ0Z1Ps54zvFr
| 1,391
|
Totally stumped :-(
|
{
"login": "itscvenk",
"id": 117738376,
"node_id": "U_kgDOBwSLiA",
"avatar_url": "https://avatars.githubusercontent.com/u/117738376?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itscvenk",
"html_url": "https://github.com/itscvenk",
"followers_url": "https://api.github.com/users/itscvenk/followers",
"following_url": "https://api.github.com/users/itscvenk/following{/other_user}",
"gists_url": "https://api.github.com/users/itscvenk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itscvenk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itscvenk/subscriptions",
"organizations_url": "https://api.github.com/users/itscvenk/orgs",
"repos_url": "https://api.github.com/users/itscvenk/repos",
"events_url": "https://api.github.com/users/itscvenk/events{/privacy}",
"received_events_url": "https://api.github.com/users/itscvenk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 9
| 2023-12-05T18:03:32
| 2023-12-07T08:02:55
| 2023-12-06T16:45:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have this in the config (and yes, it is below and above the respective sections, as I learnt the hard way, LOL):
```
Environment="OLLAMA_HOST=mysubdomain.domain.com:11434"
Environment="OLLAMA_ORIGINS='my.ip.in.v4'"
```
Actual values were used above, and the server was also rebooted (as restarting the service had no effect).
And with localhost, it works fine; for example:
```
curl http://localhost:11434/api/generate -d '{
"model": "llama2",
"prompt":"Why is the sky blue?"
}'
```
But when I use mysubdomain.domain.com, I get connection refused, even when I try from a shell on the same host :-(
And it doesn't matter if I use http or https. I have installed Let's Encrypt certificates on the server.
```
curl http://mysubdomain.domain.com:11434/api/generate -d '{
> "model": "llama2",
he sky b> "prompt":"Why is the sky blue?"
> }'
curl: (7) Failed to connect to mysubdomain.mydomain.com port 11434 after 140 ms: Connection refused
```
This has me totally foxed! The http call should work, right? And I hope https will work remotely if it is allowed in "OLLAMA_ORIGINS" in the config.
Please help.
Thanks
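A "connection refused" from the same host usually means nothing is listening on the address the hostname resolves to. By default the server binds only to localhost, so setting `OLLAMA_HOST=0.0.0.0` (to listen on all interfaces) is the usual fix. A small probe to check which addresses actually accept connections — a generic sketch, not an Ollama utility:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: compare the loopback and public addresses.
# can_connect("127.0.0.1", 11434)            -> server reachable locally
# can_connect("mysubdomain.domain.com", 11434) -> False if bound to 127.0.0.1 only
```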
|
{
"login": "itscvenk",
"id": 117738376,
"node_id": "U_kgDOBwSLiA",
"avatar_url": "https://avatars.githubusercontent.com/u/117738376?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itscvenk",
"html_url": "https://github.com/itscvenk",
"followers_url": "https://api.github.com/users/itscvenk/followers",
"following_url": "https://api.github.com/users/itscvenk/following{/other_user}",
"gists_url": "https://api.github.com/users/itscvenk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itscvenk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itscvenk/subscriptions",
"organizations_url": "https://api.github.com/users/itscvenk/orgs",
"repos_url": "https://api.github.com/users/itscvenk/repos",
"events_url": "https://api.github.com/users/itscvenk/events{/privacy}",
"received_events_url": "https://api.github.com/users/itscvenk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1391/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2276
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2276/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2276/comments
|
https://api.github.com/repos/ollama/ollama/issues/2276/events
|
https://github.com/ollama/ollama/issues/2276
| 2,108,156,325
|
I_kwDOJ0Z1Ps59p-ml
| 2,276
|
Unhandled Runtime Error
|
{
"login": "hamperia4",
"id": 98347762,
"node_id": "U_kgDOBdyq8g",
"avatar_url": "https://avatars.githubusercontent.com/u/98347762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamperia4",
"html_url": "https://github.com/hamperia4",
"followers_url": "https://api.github.com/users/hamperia4/followers",
"following_url": "https://api.github.com/users/hamperia4/following{/other_user}",
"gists_url": "https://api.github.com/users/hamperia4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamperia4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamperia4/subscriptions",
"organizations_url": "https://api.github.com/users/hamperia4/orgs",
"repos_url": "https://api.github.com/users/hamperia4/repos",
"events_url": "https://api.github.com/users/hamperia4/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamperia4/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-30T15:33:36
| 2024-02-20T04:08:17
| 2024-02-20T04:08:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Although SUPABASE_URL and SUPABASE_ANON_KEY are correct, after running nvm I get the error below locally:
<img width="963" alt="Screenshot 2024-01-30 at 5 33 15 PM" src="https://github.com/ollama/ollama/assets/98347762/b3e7370c-934e-4db2-ada7-1062d129a201">
Any ideas?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2276/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4627
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4627/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4627/comments
|
https://api.github.com/repos/ollama/ollama/issues/4627/events
|
https://github.com/ollama/ollama/pull/4627
| 2,316,628,597
|
PR_kwDOJ0Z1Ps5wiCOz
| 4,627
|
Add OLLAMA_MAX_DOWNLOAD_PARTS env to support config parallel download parts
|
{
"login": "coolljt0725",
"id": 8232360,
"node_id": "MDQ6VXNlcjgyMzIzNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8232360?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coolljt0725",
"html_url": "https://github.com/coolljt0725",
"followers_url": "https://api.github.com/users/coolljt0725/followers",
"following_url": "https://api.github.com/users/coolljt0725/following{/other_user}",
"gists_url": "https://api.github.com/users/coolljt0725/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coolljt0725/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coolljt0725/subscriptions",
"organizations_url": "https://api.github.com/users/coolljt0725/orgs",
"repos_url": "https://api.github.com/users/coolljt0725/repos",
"events_url": "https://api.github.com/users/coolljt0725/events{/privacy}",
"received_events_url": "https://api.github.com/users/coolljt0725/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-05-25T03:03:55
| 2024-12-29T19:28:50
| 2024-12-29T19:28:50
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4627",
"html_url": "https://github.com/ollama/ollama/pull/4627",
"diff_url": "https://github.com/ollama/ollama/pull/4627.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4627.patch",
"merged_at": null
}
|
Add an environment variable `OLLAMA_MAX_DOWNLOAD_PARTS` to configure the maximum number of download parts fetched in parallel.
This PR closes #4595
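
A hypothetical invocation (the variable name is taken from this PR; the value is illustrative, not a recommended default):
```sh
OLLAMA_MAX_DOWNLOAD_PARTS=8 ollama serve
```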
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4627/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4627/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1561
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1561/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1561/comments
|
https://api.github.com/repos/ollama/ollama/issues/1561/events
|
https://github.com/ollama/ollama/issues/1561
| 2,044,668,218
|
I_kwDOJ0Z1Ps553yk6
| 1,561
|
GPU not being used and 'out of memory' - 'no CUDA-capable device is detected' errors while running on Docker Compose
|
{
"login": "seth100",
"id": 4366877,
"node_id": "MDQ6VXNlcjQzNjY4Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4366877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seth100",
"html_url": "https://github.com/seth100",
"followers_url": "https://api.github.com/users/seth100/followers",
"following_url": "https://api.github.com/users/seth100/following{/other_user}",
"gists_url": "https://api.github.com/users/seth100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seth100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seth100/subscriptions",
"organizations_url": "https://api.github.com/users/seth100/orgs",
"repos_url": "https://api.github.com/users/seth100/repos",
"events_url": "https://api.github.com/users/seth100/events{/privacy}",
"received_events_url": "https://api.github.com/users/seth100/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2023-12-16T08:47:02
| 2024-02-01T23:18:24
| 2024-02-01T23:18:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm using the following docker compose file:
```yml
ollama:
image: ollama/ollama:latest
container_name: ollama
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
volumes:
- ./ollama:/root/.ollama
ports:
- 11434:11434
tty: true
restart: unless-stopped
```
I'm on Ubuntu 22.04.
The GPU is a `GeForce GTX 1660 OC edition 6GB GDDR5` and `nvidia-container-toolkit` is installed.
Here is the output of `$ docker exec -it ollama nvidia-smi`:
```sh
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.29.06 Driver Version: 545.29.06 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce GTX 1660 Off | 00000000:26:00.0 On | N/A |
| 27% 36C P5 10W / 120W | 887MiB / 6144MiB | 8% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
```
The issue is that I get the following errors and only the CPU is used while running ollama; the GPU stays idle:
**`llama2`, `mistral`**:
```sh
ollama | CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:9080: out of memory
ollama | current device: 0
ollama | GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:9080: !"CUDA error"
ollama | 2023/12/16 08:19:26 llama.go:451: 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:9080: out of memory
ollama | current device: 0
ollama | GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:9080: !"CUDA error"
ollama | 2023/12/16 08:19:26 llama.go:459: error starting llama runner: llama runner process has terminated
ollama | 2023/12/16 08:19:26 llama.go:525: llama runner stopped successfully
ollama | 2023/12/16 08:19:26 llama.go:436: starting llama runner
ollama | 2023/12/16 08:19:26 llama.go:494: waiting for llama runner to start responding
ollama | {"timestamp":1702714766,"level":"WARNING","function":"server_params_parse","line":2148,"message":"Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_layers":-1}
ollama | {"timestamp":1702714766,"level":"INFO","function":"main","line":2652,"message":"build info","build":441,"commit":"948ff13"}
ollama | {"timestamp":1702714766,"level":"INFO","function":"main","line":2655,"message":"system info","n_threads":6,"n_threads_batch":-1,"total_threads":12,"system_info":"AVX = 1 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
```
**`orca-mini`**:
```sh
ollama | CUDA error 100 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:493: no CUDA-capable device is detected
ollama | current device: 624750624
ollama | GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:493: !"CUDA error"
ollama | 2023/12/16 08:48:01 llama.go:451: 100 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:493: no CUDA-capable device is detected
ollama | current device: 624750624
ollama | GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:493: !"CUDA error"
ollama | 2023/12/16 08:48:01 llama.go:459: error starting llama runner: llama runner process has terminated
ollama | 2023/12/16 08:48:01 llama.go:525: llama runner stopped successfully
ollama | 2023/12/16 08:48:01 llama.go:436: starting llama runner
ollama | 2023/12/16 08:48:01 llama.go:494: waiting for llama runner to start responding
ollama | {"timestamp":1702716481,"level":"WARNING","function":"server_params_parse","line":2148,"message":"Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_layers":-1}
ollama | {"timestamp":1702716481,"level":"INFO","function":"main","line":2652,"message":"build info","build":441,"commit":"948ff13"}
ollama | {"timestamp":1702716481,"level":"INFO","function":"main","line":2655,"message":"system info","n_threads":6,"n_threads_batch":-1,"total_threads":12,"system_info":"AVX = 1 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
```
I noticed from other issues that some of those errors are common for other people, is that a bug or am I doing anything wrong?
I also tried adding the following to the compose yaml file:
```yml
runtime: nvidia
cap_add:
- SYS_ADMIN
privileged: true
environment:
- NVIDIA_DRIVER_CAPABILITIES=compute,utility
- NVIDIA_VISIBLE_DEVICES=all
```
but I get the same results!
Thanks
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1561/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1561/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4848
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4848/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4848/comments
|
https://api.github.com/repos/ollama/ollama/issues/4848/events
|
https://github.com/ollama/ollama/pull/4848
| 2,337,573,704
|
PR_kwDOJ0Z1Ps5xpQKl
| 4,848
|
Add qollama to list of Web & Desktop integrations
|
{
"login": "farleyrunkel",
"id": 162782461,
"node_id": "U_kgDOCbPc_Q",
"avatar_url": "https://avatars.githubusercontent.com/u/162782461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farleyrunkel",
"html_url": "https://github.com/farleyrunkel",
"followers_url": "https://api.github.com/users/farleyrunkel/followers",
"following_url": "https://api.github.com/users/farleyrunkel/following{/other_user}",
"gists_url": "https://api.github.com/users/farleyrunkel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/farleyrunkel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farleyrunkel/subscriptions",
"organizations_url": "https://api.github.com/users/farleyrunkel/orgs",
"repos_url": "https://api.github.com/users/farleyrunkel/repos",
"events_url": "https://api.github.com/users/farleyrunkel/events{/privacy}",
"received_events_url": "https://api.github.com/users/farleyrunkel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-06-06T07:45:57
| 2024-11-28T10:30:42
| 2024-11-21T09:39:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4848",
"html_url": "https://github.com/ollama/ollama/pull/4848",
"diff_url": "https://github.com/ollama/ollama/pull/4848.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4848.patch",
"merged_at": null
}
|
QOllama is a Qt-based client for [ollama](https://github.com/ollama/ollama), providing a user-friendly interface for interacting with the model and managing chat history. It supports cross-platform functionality, ensuring a seamless experience on Windows, macOS, and Linux.
Go to QOllama: [https://github.com/farleyrunkel/qollama](https://github.com/farleyrunkel/qollama)

| Linux | Windows | MacOS |
| :---: | :---: | :---: |
| ... | |  |
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4848/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/948
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/948/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/948/comments
|
https://api.github.com/repos/ollama/ollama/issues/948/events
|
https://github.com/ollama/ollama/pull/948
| 1,968,699,514
|
PR_kwDOJ0Z1Ps5eI21V
| 948
|
Fix conversion command for gptneox
|
{
"login": "dloss",
"id": 744603,
"node_id": "MDQ6VXNlcjc0NDYwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/744603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dloss",
"html_url": "https://github.com/dloss",
"followers_url": "https://api.github.com/users/dloss/followers",
"following_url": "https://api.github.com/users/dloss/following{/other_user}",
"gists_url": "https://api.github.com/users/dloss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dloss/subscriptions",
"organizations_url": "https://api.github.com/users/dloss/orgs",
"repos_url": "https://api.github.com/users/dloss/repos",
"events_url": "https://api.github.com/users/dloss/events{/privacy}",
"received_events_url": "https://api.github.com/users/dloss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-30T15:52:56
| 2023-10-30T18:34:29
| 2023-10-30T18:34:29
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/948",
"html_url": "https://github.com/ollama/ollama/pull/948",
"diff_url": "https://github.com/ollama/ollama/pull/948.diff",
"patch_url": "https://github.com/ollama/ollama/pull/948.patch",
"merged_at": "2023-10-30T18:34:29"
}
| null |
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/948/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3326
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3326/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3326/comments
|
https://api.github.com/repos/ollama/ollama/issues/3326/events
|
https://github.com/ollama/ollama/issues/3326
| 2,204,445,373
|
I_kwDOJ0Z1Ps6DZSq9
| 3,326
|
Sha256 code mismatch pulling a model
|
{
"login": "ipsmile",
"id": 28075439,
"node_id": "MDQ6VXNlcjI4MDc1NDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/28075439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ipsmile",
"html_url": "https://github.com/ipsmile",
"followers_url": "https://api.github.com/users/ipsmile/followers",
"following_url": "https://api.github.com/users/ipsmile/following{/other_user}",
"gists_url": "https://api.github.com/users/ipsmile/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ipsmile/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ipsmile/subscriptions",
"organizations_url": "https://api.github.com/users/ipsmile/orgs",
"repos_url": "https://api.github.com/users/ipsmile/repos",
"events_url": "https://api.github.com/users/ipsmile/events{/privacy}",
"received_events_url": "https://api.github.com/users/ipsmile/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-03-24T16:39:05
| 2024-03-27T22:43:05
| 2024-03-27T22:43:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Received the following messages while executing "ollama pull wizard-vicuna"
Error: digest mismatch, file must be downloaded again: want sha256:1ede1e83f21c3c72f7b1ce304920a3d8f6eaf8304cfda8fd82864287033175dc, got sha256:5130a22afc1df70a9babbe0d8843a6a65fd6647cc8d4836a476896fc61f0e3aa
### What did you expect to see?
A successful pull, free of error messages
### Steps to reproduce
Just issue the command in a terminal: ollama pull wizard-vicuna
### Are there any recent changes that introduced the issue?
No, this is the ollama package installed about a month ago. It has not been updated since then.
### OS
Linux
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.25
### GPU
AMD
### GPU info
GPU: integrated Vega GPU in AMD Ryzen 7 5700G
rocminfo output:
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version: 1.1
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
==========
HSA Agents
==========
*******
Agent 1
*******
Name: AMD Ryzen 7 5700G with Radeon Graphics
Uuid: CPU-XX
Marketing Name: AMD Ryzen 7 5700G with Radeon Graphics
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
Cache Info:
L1: 32768(0x8000) KB
Chip ID: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 3800
BDFID: 0
Internal Node ID: 0
Compute Unit: 16
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:1
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 65158508(0x3e23d6c) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 65158508(0x3e23d6c) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 65158508(0x3e23d6c) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:
*******
Agent 2
*******
Name: gfx90c
Uuid: GPU-XX
Marketing Name:
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 4096(0x1000)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 1
Device Type: GPU
Cache Info:
L1: 16(0x10) KB
L2: 1024(0x400) KB
Chip ID: 5688(0x1638)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 2000
BDFID: 1792
Internal Node ID: 1
Compute Unit: 8
SIMDs per CU: 4
Shader Engines: 1
Shader Arrs. per Eng.: 1
WatchPts on Addr. Ranges:4
Features: KERNEL_DISPATCH
Fast F16 Operation: TRUE
Wavefront Size: 64(0x40)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 40(0x28)
Max Work-item Per CU: 2560(0xa00)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 524288(0x80000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx90c:xnack-
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*** Done ***
### CPU
AMD
### Other software
A fairly clean Linux Mint 21.3, none of other software could be the suspect causing this mismatched SHA256 error.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3326/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/7447
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7447/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7447/comments
|
https://api.github.com/repos/ollama/ollama/issues/7447/events
|
https://github.com/ollama/ollama/issues/7447
| 2,626,601,072
|
I_kwDOJ0Z1Ps6cjsBw
| 7,447
|
Feature Request: count tokens before calling '/v1/chat/completions'
|
{
"login": "GPTLocalhost",
"id": 72584872,
"node_id": "MDQ6VXNlcjcyNTg0ODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/72584872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GPTLocalhost",
"html_url": "https://github.com/GPTLocalhost",
"followers_url": "https://api.github.com/users/GPTLocalhost/followers",
"following_url": "https://api.github.com/users/GPTLocalhost/following{/other_user}",
"gists_url": "https://api.github.com/users/GPTLocalhost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GPTLocalhost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GPTLocalhost/subscriptions",
"organizations_url": "https://api.github.com/users/GPTLocalhost/orgs",
"repos_url": "https://api.github.com/users/GPTLocalhost/repos",
"events_url": "https://api.github.com/users/GPTLocalhost/events{/privacy}",
"received_events_url": "https://api.github.com/users/GPTLocalhost/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-10-31T11:17:15
| 2024-12-02T14:49:51
| 2024-12-02T14:49:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Recently, we integrated Microsoft Word with Ollama through a local Word Add-in. You can view a demo [here](https://gptlocalhost.com/demo/). We're planning to add a feature that counts tokens before calling '/v1/chat/completions', allowing users to see how many tokens remain available for inference. Our question is: is it possible for Ollama to count the tokens of the prompt before calling '/v1/chat/completions'? Thank you for your advice.
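Until a dedicated count endpoint exists, one stopgap is a rough client-side estimate using the common "~4 characters per token" heuristic for English text. This is only an approximation (the true count depends on the model's tokenizer), and the function name below is hypothetical:

```python
# Rough client-side token estimate; ~4 characters per token is a common
# heuristic for English text. Approximation only -- the exact count
# depends on the model's tokenizer.

def estimate_tokens(text: str) -> int:
    """Approximate the number of tokens in `text` (chars / 4, rounded up)."""
    return max(1, (len(text) + 3) // 4)

# e.g. tokens remaining in an assumed 8192-token context window:
remaining = 8192 - estimate_tokens("Summarize this document for me.")
```

For an exact count after the fact, the `prompt_eval_count` field returned by Ollama's native `/api/generate` and `/api/chat` endpoints reports how many tokens the prompt actually consumed.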
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7447/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/5849
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5849/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5849/comments
|
https://api.github.com/repos/ollama/ollama/issues/5849/events
|
https://github.com/ollama/ollama/issues/5849
| 2,422,642,175
|
I_kwDOJ0Z1Ps6QZpX_
| 5,849
|
How to force the use of two GPUs to run a model?
|
{
"login": "mizzlefeng",
"id": 54129071,
"node_id": "MDQ6VXNlcjU0MTI5MDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/54129071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mizzlefeng",
"html_url": "https://github.com/mizzlefeng",
"followers_url": "https://api.github.com/users/mizzlefeng/followers",
"following_url": "https://api.github.com/users/mizzlefeng/following{/other_user}",
"gists_url": "https://api.github.com/users/mizzlefeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mizzlefeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mizzlefeng/subscriptions",
"organizations_url": "https://api.github.com/users/mizzlefeng/orgs",
"repos_url": "https://api.github.com/users/mizzlefeng/repos",
"events_url": "https://api.github.com/users/mizzlefeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mizzlefeng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-07-22T11:32:17
| 2024-07-22T22:22:56
| 2024-07-22T22:22:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have reviewed many issues, including [#4198](https://github.com/ollama/ollama/issues/4198), [#4517](https://github.com/ollama/ollama/pull/4517) and so on.
I found that the explanation given is that if a single GPU has enough memory to run the current model, additional GPUs will not be used. But what can I do to force it to split the model evenly across two GPUs? Even setting OLLAMA_NUM_PARALLEL to 2 was ineffective; only one GPU was used in the end.
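For reference, one possible workaround is the scheduler-spread environment variable available in newer Ollama builds. A sketch (assuming two CUDA GPUs; the model name is illustrative):

```shell
# Ask the scheduler to spread layers across all available GPUs instead
# of packing the model onto one (supported in recent Ollama versions).
export OLLAMA_SCHED_SPREAD=1
ollama serve &

# Load a model, then verify the split:
ollama run llama3 "hello"
nvidia-smi   # both GPUs should now show memory allocated
```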
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5849/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4331
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4331/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4331/comments
|
https://api.github.com/repos/ollama/ollama/issues/4331/events
|
https://github.com/ollama/ollama/pull/4331
| 2,290,532,999
|
PR_kwDOJ0Z1Ps5vJDWe
| 4,331
|
Fix envconfig unit test
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-05-10T23:50:11
| 2024-05-11T16:16:28
| 2024-05-11T16:16:28
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4331",
"html_url": "https://github.com/ollama/ollama/pull/4331",
"diff_url": "https://github.com/ollama/ollama/pull/4331.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4331.patch",
"merged_at": "2024-05-11T16:16:28"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4331/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2898
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2898/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2898/comments
|
https://api.github.com/repos/ollama/ollama/issues/2898/events
|
https://github.com/ollama/ollama/issues/2898
| 2,165,486,408
|
I_kwDOJ0Z1Ps6BErNI
| 2,898
|
v0.1.28 RC: CUDA error: out of memory
|
{
"login": "ovaisq",
"id": 9484502,
"node_id": "MDQ6VXNlcjk0ODQ1MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9484502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ovaisq",
"html_url": "https://github.com/ovaisq",
"followers_url": "https://api.github.com/users/ovaisq/followers",
"following_url": "https://api.github.com/users/ovaisq/following{/other_user}",
"gists_url": "https://api.github.com/users/ovaisq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ovaisq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ovaisq/subscriptions",
"organizations_url": "https://api.github.com/users/ovaisq/orgs",
"repos_url": "https://api.github.com/users/ovaisq/repos",
"events_url": "https://api.github.com/users/ovaisq/events{/privacy}",
"received_events_url": "https://api.github.com/users/ovaisq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-03-03T18:36:35
| 2024-03-12T01:33:38
| 2024-03-12T01:33:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Ollama v0.1.28 RC
Ryzen 7 1700 - 48GB RAM - 500GB SSD
GeForce GTX 1070ti 8GB VRAM - Driver v551.61
Windows 11 Pro
My Python code (running on a Debian 12 instance - making remote calls over local network) is looping through deepseek-llm, llama2, gemma LLMs doing this:
```python
from ollama import AsyncClient  # import added for completeness

client = AsyncClient(host='OLLAMA_API_URL')
response = await client.chat(
    model=llm,
    stream=False,
    messages=[
        {'role': 'user', 'content': content},
    ],
    options={'temperature': 0},
)
```
The Ollama server crashes at around the 10th iteration.
Ollama crash error:
```
CUDA error: out of memory
current device: 0, in function ggml_cuda_pool_malloc_vmm at C:\Users\jmorg\git\ollama\llm\llama.cpp\ggml-cuda.cu:8587
cuMemAddressReserve(&g_cuda_pool_addr[device], CUDA_POOL_VMM_MAX_SIZE, 0, 0, 0)
GGML_ASSERT: C:\Users\jmorg\git\ollama\llm\llama.cpp\ggml-cuda.cu:256: !"CUDA error"
```
Please let me know if any further information is needed.
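As a defensive measure on the client side (not a fix for the server crash), the loop above can be wrapped so one dropped connection doesn't abort the whole run. The helper below is hypothetical:

```python
import asyncio

# Hypothetical retry helper: retries an awaitable factory with linear
# backoff when the server drops the connection mid-loop.
async def with_retries(make_call, attempts=3, delay=1.0):
    for attempt in range(1, attempts + 1):
        try:
            return await make_call()
        except Exception:
            if attempt == attempts:
                raise  # exhausted retries; surface the original error
            await asyncio.sleep(delay * attempt)

# Usage sketch:
#   response = await with_retries(lambda: client.chat(model=llm, ...))
```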
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2898/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7424
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7424/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7424/comments
|
https://api.github.com/repos/ollama/ollama/issues/7424/events
|
https://github.com/ollama/ollama/pull/7424
| 2,624,694,826
|
PR_kwDOJ0Z1Ps6AamBJ
| 7,424
|
boost embed endpoint
|
{
"login": "liuy",
"id": 1192888,
"node_id": "MDQ6VXNlcjExOTI4ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1192888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liuy",
"html_url": "https://github.com/liuy",
"followers_url": "https://api.github.com/users/liuy/followers",
"following_url": "https://api.github.com/users/liuy/following{/other_user}",
"gists_url": "https://api.github.com/users/liuy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liuy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liuy/subscriptions",
"organizations_url": "https://api.github.com/users/liuy/orgs",
"repos_url": "https://api.github.com/users/liuy/repos",
"events_url": "https://api.github.com/users/liuy/events{/privacy}",
"received_events_url": "https://api.github.com/users/liuy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 5
| 2024-10-30T16:43:31
| 2025-01-02T18:49:23
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7424",
"html_url": "https://github.com/ollama/ollama/pull/7424",
"diff_url": "https://github.com/ollama/ollama/pull/7424.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7424.patch",
"merged_at": null
}
|
Get token counts in the runner instead of in the route handler.
Even on the following simple request, I measured a nearly 20x speedup.
```shell
curl http://localhost:11434/api/embed -d '{
  "model": "all-minilm",
  "input": ["Why is the sky blue?", "Why is the grass green?"]
}'
```
new approach: "total_duration":14239148
old approach: "total_duration":240871657
fix #7400
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7424/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7424/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6211
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6211/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6211/comments
|
https://api.github.com/repos/ollama/ollama/issues/6211/events
|
https://github.com/ollama/ollama/issues/6211
| 2,451,805,566
|
I_kwDOJ0Z1Ps6SI5V-
| 6,211
|
Error: max retries exceeded
|
{
"login": "igorschlum",
"id": 2884312,
"node_id": "MDQ6VXNlcjI4ODQzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorschlum",
"html_url": "https://github.com/igorschlum",
"followers_url": "https://api.github.com/users/igorschlum/followers",
"following_url": "https://api.github.com/users/igorschlum/following{/other_user}",
"gists_url": "https://api.github.com/users/igorschlum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/igorschlum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/igorschlum/subscriptions",
"organizations_url": "https://api.github.com/users/igorschlum/orgs",
"repos_url": "https://api.github.com/users/igorschlum/repos",
"events_url": "https://api.github.com/users/igorschlum/events{/privacy}",
"received_events_url": "https://api.github.com/users/igorschlum/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-08-06T22:42:52
| 2025-01-30T04:39:10
| 2024-08-11T23:09:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am in a place with a slow ADSL connection that works for loading pages and checking emails. However, I can't pull LLM models because the download is regularly interrupted by an 'Error: max retries exceeded' after about 2 or 3 minutes. If I use my phone with connection sharing over 5G, it works well. I don't think it's a new problem, as I don't often pull models from this location, but I remember that this issue was supposed to be fixed about a year ago.
(base) igor@Mac-Studio-192 ~ % ollama pull llama3.1:405b-instruct-q2_K
pulling manifest
pulling 91a950ecda13... 0% ▕ ▏ 240 MB/151 GB
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/91/91a950ecda13411e7b54f0df08b965a4ff2d38738556e89fbbe06ab5ee1a8d18/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240806%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240806T212828Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=b0480c0fd7b0c78a2204922fa1608e972c6438155b83ff03c7555a43fdc0e6b5": net/http: TLS handshake timeout
(base) igor@Mac-Studio-192 ~ % ollama pull llama3.1:405b-instruct-q2_K
pulling manifest
pulling 91a950ecda13... 0% ▕ ▏ 468 MB/151 GB
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/91/91a950ecda13411e7b54f0df08b965a4ff2d38738556e89fbbe06ab5ee1a8d18/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240806%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240806T213331Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=12d37fed240dd1f741823322bb3718b46ca05e4f78eabaf4fc69054275592cd0": net/http: TLS handshake timeout
(base) igor@Mac-Studio-192 ~ % ollama pull llama3.1:405b-instruct-q2_K
pulling manifest
pulling 91a950ecda13... 0% ▕ ▏ 709 MB/151 GB
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/91/91a950ecda13411e7b54f0df08b965a4ff2d38738556e89fbbe06ab5ee1a8d18/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240806%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240806T213930Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=2ba7b7e37ec8436dde00315d61e53fabc7bb784a4b2360a6c5f6b19e54e362e1": net/http: TLS handshake timeout
(base) igor@Mac-Studio-192 ~ % ollama pull llama3.1:405b-instruct-q2_K
pulling manifest
pulling 91a950ecda13... 1% ▕ ▏ 950 MB/151 GB
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/91/91a950ecda13411e7b54f0df08b965a4ff2d38738556e89fbbe06ab5ee1a8d18/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240806%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240806T214526Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=e58417cf631b6443136f09b1dc99c131059a7021b712ec938a6c523426e56801": net/http: TLS handshake timeout
(base) igor@Mac-Studio-192 ~ % ollama pull llama3.1:405b-instruct-q2_K
pulling manifest
pulling 91a950ecda13... 1% ▕ ▏ 1.1 GB/151 GB 316 KB/s 99h+
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/91/91a950ecda13411e7b54f0df08b965a4ff2d38738556e89fbbe06ab5ee1a8d18/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240806%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240806T214937Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=d0551598d8332b8fa7680e42c0bbf2298d9ea0d78afeeb1042c681b7c4563b64": net/http: TLS handshake timeout
(base) igor@Mac-Studio-192 ~ % ollama pull llama3.1:405b-instruct-q2_K
pulling manifest
pulling 91a950ecda13... 1% ▕ ▏ 1.4 GB/151 GB 1.3 MB/s 31h26m
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/91/91a950ecda13411e7b54f0df08b965a4ff2d38738556e89fbbe06ab5ee1a8d18/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240806%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240806T215955Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=410db491fc93079799905b740c91755269e7daf54433e3be6ae6a3e57f338209": net/http: TLS handshake timeout
(base) igor@Mac-Studio-192 ~ % ollama pull llama3.1:405b-instruct-q2_K
pulling manifest
pulling 91a950ecda13... 1% ▕ ▏ 1.7 GB/151 GB
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/91/91a950ecda13411e7b54f0df08b965a4ff2d38738556e89fbbe06ab5ee1a8d18/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20240806%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20240806T222208Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=bc366a803f1e3a7a717b3fd2ab54d31e6804057a0e4d0c4c77f5324d83cec398": read tcp 192.168.1.30:55654->104.18.9.90:443: read: connection reset by peer
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.4
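The logs above show each retry resuming from the previous partial blob (240 MB, 468 MB, 709 MB, ...), so a simple shell loop is a common stopgap on flaky links (a sketch, not an official feature):

```shell
# Retry until the pull succeeds; ollama resumes from the partial
# download, so each attempt makes forward progress.
until ollama pull llama3.1:405b-instruct-q2_K; do
  echo "pull interrupted; retrying in 10s..." >&2
  sleep 10
done
```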
|
{
"login": "igorschlum",
"id": 2884312,
"node_id": "MDQ6VXNlcjI4ODQzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorschlum",
"html_url": "https://github.com/igorschlum",
"followers_url": "https://api.github.com/users/igorschlum/followers",
"following_url": "https://api.github.com/users/igorschlum/following{/other_user}",
"gists_url": "https://api.github.com/users/igorschlum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/igorschlum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/igorschlum/subscriptions",
"organizations_url": "https://api.github.com/users/igorschlum/orgs",
"repos_url": "https://api.github.com/users/igorschlum/repos",
"events_url": "https://api.github.com/users/igorschlum/events{/privacy}",
"received_events_url": "https://api.github.com/users/igorschlum/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6211/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5253
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5253/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5253/comments
|
https://api.github.com/repos/ollama/ollama/issues/5253/events
|
https://github.com/ollama/ollama/issues/5253
| 2,369,830,728
|
I_kwDOJ0Z1Ps6NQL9I
| 5,253
|
Add queue position indicator
|
{
"login": "uzumakinaruto19",
"id": 99479748,
"node_id": "U_kgDOBe3wxA",
"avatar_url": "https://avatars.githubusercontent.com/u/99479748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uzumakinaruto19",
"html_url": "https://github.com/uzumakinaruto19",
"followers_url": "https://api.github.com/users/uzumakinaruto19/followers",
"following_url": "https://api.github.com/users/uzumakinaruto19/following{/other_user}",
"gists_url": "https://api.github.com/users/uzumakinaruto19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uzumakinaruto19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uzumakinaruto19/subscriptions",
"organizations_url": "https://api.github.com/users/uzumakinaruto19/orgs",
"repos_url": "https://api.github.com/users/uzumakinaruto19/repos",
"events_url": "https://api.github.com/users/uzumakinaruto19/events{/privacy}",
"received_events_url": "https://api.github.com/users/uzumakinaruto19/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-06-24T10:15:29
| 2024-11-06T01:17:42
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently, when running resource-intensive models on Ollama, especially on less powerful hardware, it's not clear how long processing might take or if there's a queue of tasks.
Feature request:
1. Implement a way to show the user's position in the processing queue (if any); this is my main concern.
2. Add an option to display estimated time until processing begins or completes.
This feature would be beneficial for:
- Users running large models on consumer-grade hardware
- Understanding and managing processing times
- Improving user experience by providing more information about task status
Possible implementation ideas:
- Add a new command like `ollama status` to show current queue position and estimates
- Include this information in verbose output modes
- Optionally display this info in the command-line interface during model runs
- It would be great if this information were also available via an API call
Has this been considered before? It would greatly enhance the user experience when working with more demanding models or on systems with limited resources.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5253/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/5253/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6665
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6665/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6665/comments
|
https://api.github.com/repos/ollama/ollama/issues/6665/events
|
https://github.com/ollama/ollama/pull/6665
| 2,509,085,392
|
PR_kwDOJ0Z1Ps56mGN8
| 6,665
|
Fix "presence_penalty_penalty" typo, add test.
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-09-06T00:04:35
| 2024-09-06T17:07:31
| 2024-09-06T08:16:28
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6665",
"html_url": "https://github.com/ollama/ollama/pull/6665",
"diff_url": "https://github.com/ollama/ollama/pull/6665.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6665.patch",
"merged_at": "2024-09-06T08:16:28"
}
|
Fixes: https://github.com/ollama/ollama/issues/6640
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6665/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2273
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2273/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2273/comments
|
https://api.github.com/repos/ollama/ollama/issues/2273/events
|
https://github.com/ollama/ollama/issues/2273
| 2,107,498,277
|
I_kwDOJ0Z1Ps59nd8l
| 2,273
|
Line breaks are stripped when pasting to the prompt when running under WezTerm
|
{
"login": "eproxus",
"id": 112878,
"node_id": "MDQ6VXNlcjExMjg3OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/112878?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eproxus",
"html_url": "https://github.com/eproxus",
"followers_url": "https://api.github.com/users/eproxus/followers",
"following_url": "https://api.github.com/users/eproxus/following{/other_user}",
"gists_url": "https://api.github.com/users/eproxus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eproxus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eproxus/subscriptions",
"organizations_url": "https://api.github.com/users/eproxus/orgs",
"repos_url": "https://api.github.com/users/eproxus/repos",
"events_url": "https://api.github.com/users/eproxus/events{/privacy}",
"received_events_url": "https://api.github.com/users/eproxus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-01-30T10:35:41
| 2024-03-11T22:29:22
| 2024-03-11T22:29:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
(Not sure if this is a Ollama / WezTerm issue, but opening it here first)
When pasting multi-line text into the prompt when running Ollama under the WezTerm terminal on macOS, line breaks (newlines) are stripped. This does not happen with Terminal.app. It also doesn't happen in e.g. Vim, so it is something specific with the interaction between Ollama and WezTerm.
Versions
* Ollama: 0.1.22
* WezTerm: 20240128-202157-1e552d76
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2273/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5274
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5274/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5274/comments
|
https://api.github.com/repos/ollama/ollama/issues/5274/events
|
https://github.com/ollama/ollama/issues/5274
| 2,372,982,772
|
I_kwDOJ0Z1Ps6NcNf0
| 5,274
|
API works with non-functional params, no error messages
|
{
"login": "d-kleine",
"id": 53251018,
"node_id": "MDQ6VXNlcjUzMjUxMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53251018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-kleine",
"html_url": "https://github.com/d-kleine",
"followers_url": "https://api.github.com/users/d-kleine/followers",
"following_url": "https://api.github.com/users/d-kleine/following{/other_user}",
"gists_url": "https://api.github.com/users/d-kleine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d-kleine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-kleine/subscriptions",
"organizations_url": "https://api.github.com/users/d-kleine/orgs",
"repos_url": "https://api.github.com/users/d-kleine/repos",
"events_url": "https://api.github.com/users/d-kleine/events{/privacy}",
"received_events_url": "https://api.github.com/users/d-kleine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 0
| 2024-06-25T15:26:02
| 2024-11-06T01:16:03
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The API should only accept the parameters `"model"`, `"messages"`, and `"options"`, but no error message is displayed when the request also contains non-functional params, such as `"seed"` or `"temperature"` in this case:
```python
import requests

def query_model(prompt, model="llama3", url="http://localhost:11434/api/chat"):
    # Create the data payload as a dictionary
    data = {
        "model": model,
        "seed": 123,       # for deterministic responses (silently ignored: not a valid top-level key)
        "temperature": 0,  # for deterministic responses (silently ignored: not a valid top-level key)
        "messages": [
            {"role": "user", "content": prompt}
        ],
    }
    # The server accepts this request without complaining about the unknown keys
    return requests.post(url, json=data)
```
It would be great if an error message were returned when non-working params are defined in the API request.
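A sketch of the kind of guard being requested, applied client-side here: reject unknown top-level keys before sending. `ALLOWED_CHAT_KEYS` reflects the fields documented for `/api/chat` and is an assumption; `validate_chat_payload` is a hypothetical helper.

```python
# Assumed set of valid top-level keys for /api/chat (check the API docs for the current list)
ALLOWED_CHAT_KEYS = {"model", "messages", "options", "stream", "format", "keep_alive", "tools"}

def validate_chat_payload(data: dict) -> list[str]:
    """Return the unknown top-level keys, e.g. a misplaced 'seed' or
    'temperature', which belong inside 'options' instead."""
    return sorted(k for k in data if k not in ALLOWED_CHAT_KEYS)

payload = {
    "model": "llama3",
    "seed": 123,
    "temperature": 0,
    "messages": [{"role": "user", "content": "hi"}],
}
print(validate_chat_payload(payload))  # → ['seed', 'temperature']
```

The same check on the server side would turn these silent no-ops into the explicit error messages the report asks for.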
### OS
Linux, Windows, Docker, WSL2
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.45
|
{
"login": "d-kleine",
"id": 53251018,
"node_id": "MDQ6VXNlcjUzMjUxMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53251018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-kleine",
"html_url": "https://github.com/d-kleine",
"followers_url": "https://api.github.com/users/d-kleine/followers",
"following_url": "https://api.github.com/users/d-kleine/following{/other_user}",
"gists_url": "https://api.github.com/users/d-kleine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d-kleine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-kleine/subscriptions",
"organizations_url": "https://api.github.com/users/d-kleine/orgs",
"repos_url": "https://api.github.com/users/d-kleine/repos",
"events_url": "https://api.github.com/users/d-kleine/events{/privacy}",
"received_events_url": "https://api.github.com/users/d-kleine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5274/timeline
| null |
reopened
| false
|
https://api.github.com/repos/ollama/ollama/issues/5135
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5135/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5135/comments
|
https://api.github.com/repos/ollama/ollama/issues/5135/events
|
https://github.com/ollama/ollama/issues/5135
| 2,361,411,481
|
I_kwDOJ0Z1Ps6MwEeZ
| 5,135
|
HOW CAN I CHANGE THE PORT OLLAMA SERVE USES
|
{
"login": "Udacv",
"id": 126667614,
"node_id": "U_kgDOB4zLXg",
"avatar_url": "https://avatars.githubusercontent.com/u/126667614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Udacv",
"html_url": "https://github.com/Udacv",
"followers_url": "https://api.github.com/users/Udacv/followers",
"following_url": "https://api.github.com/users/Udacv/following{/other_user}",
"gists_url": "https://api.github.com/users/Udacv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Udacv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Udacv/subscriptions",
"organizations_url": "https://api.github.com/users/Udacv/orgs",
"repos_url": "https://api.github.com/users/Udacv/repos",
"events_url": "https://api.github.com/users/Udacv/events{/privacy}",
"received_events_url": "https://api.github.com/users/Udacv/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-06-19T06:10:16
| 2024-06-19T14:57:32
| 2024-06-19T14:57:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
My port 11434 is occupied. How can I change it?
I've tried `OLLAMA_HOST=127.0.0.1:11435 ollama serve`, but my cmd cannot understand that syntax.
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.44
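The `VAR=value command` prefix is POSIX shell syntax and isn't valid in Windows cmd, where the variable has to be set first (`set OLLAMA_HOST=127.0.0.1:11435`, then `ollama serve`). A portable sketch of the same idea from Python, passing the override through an environment copy; the actual launch line is left commented out since it requires the `ollama` binary:

```python
import os
import subprocess

def serve_env(host: str = "127.0.0.1:11435") -> dict:
    """Build an environment for `ollama serve` bound to `host`.
    This sidesteps the `OLLAMA_HOST=... ollama serve` one-liner,
    which Windows cmd does not understand."""
    env = os.environ.copy()
    env["OLLAMA_HOST"] = host
    return env

env = serve_env()
print(env["OLLAMA_HOST"])  # → 127.0.0.1:11435
# subprocess.Popen(["ollama", "serve"], env=env)  # uncomment to actually start the server
```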
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5135/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5725
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5725/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5725/comments
|
https://api.github.com/repos/ollama/ollama/issues/5725/events
|
https://github.com/ollama/ollama/issues/5725
| 2,411,484,240
|
I_kwDOJ0Z1Ps6PvFRQ
| 5,725
|
Mistral Codestral Mamba 7B
|
{
"login": "lestan",
"id": 1471736,
"node_id": "MDQ6VXNlcjE0NzE3MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1471736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lestan",
"html_url": "https://github.com/lestan",
"followers_url": "https://api.github.com/users/lestan/followers",
"following_url": "https://api.github.com/users/lestan/following{/other_user}",
"gists_url": "https://api.github.com/users/lestan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lestan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lestan/subscriptions",
"organizations_url": "https://api.github.com/users/lestan/orgs",
"repos_url": "https://api.github.com/users/lestan/repos",
"events_url": "https://api.github.com/users/lestan/events{/privacy}",
"received_events_url": "https://api.github.com/users/lestan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 16
| 2024-07-16T15:32:47
| 2024-11-07T16:34:46
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://mistral.ai/news/codestral-mamba/
The latest model from Mistral utilizes the Mamba architecture (vs. Transformers) and targets code generation with strong performance on the leaderboards.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5725/reactions",
"total_count": 73,
"+1": 68,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5725/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/818
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/818/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/818/comments
|
https://api.github.com/repos/ollama/ollama/issues/818/events
|
https://github.com/ollama/ollama/pull/818
| 1,947,368,046
|
PR_kwDOJ0Z1Ps5dA7jP
| 818
|
Fix a typo
|
{
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/followers",
"following_url": "https://api.github.com/users/xyproto/following{/other_user}",
"gists_url": "https://api.github.com/users/xyproto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyproto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyproto/subscriptions",
"organizations_url": "https://api.github.com/users/xyproto/orgs",
"repos_url": "https://api.github.com/users/xyproto/repos",
"events_url": "https://api.github.com/users/xyproto/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyproto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-17T12:58:34
| 2023-10-17T13:00:16
| 2023-10-17T13:00:16
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/818",
"html_url": "https://github.com/ollama/ollama/pull/818",
"diff_url": "https://github.com/ollama/ollama/pull/818.diff",
"patch_url": "https://github.com/ollama/ollama/pull/818.patch",
"merged_at": "2023-10-17T13:00:16"
}
|
The key in the JSON response is `embedding`, not `embeddings`:
```sh
curl -X POST http://localhost:11434/api/embeddings -d '{
"model": "codeup:latest",
"prompt": "Here is an article about llamas..."
}'
```
```json
{"embedding":[-1.3911274671554565,0.045920971781015396,1.0808414220809937,0.058245059102773666,-0.27932560443878174,-0.1968495100736618,1.1102352142333984,0.9859555959701538,0.9562729597091675,-0.19171573221683502,0.16944187879562378,-0.5829504132270813,0.19427405 ...
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/818/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7456
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7456/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7456/comments
|
https://api.github.com/repos/ollama/ollama/issues/7456/events
|
https://github.com/ollama/ollama/pull/7456
| 2,627,815,269
|
PR_kwDOJ0Z1Ps6AklaG
| 7,456
|
update llama3.2 vision memory estimation
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-31T20:58:59
| 2024-11-04T17:48:45
| 2024-11-04T17:48:43
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7456",
"html_url": "https://github.com/ollama/ollama/pull/7456",
"diff_url": "https://github.com/ollama/ollama/pull/7456.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7456.patch",
"merged_at": "2024-11-04T17:48:43"
}
|
Adjust estimations for mllama, which has conditional graph components and a different cache shape.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7456/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5087
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5087/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5087/comments
|
https://api.github.com/repos/ollama/ollama/issues/5087/events
|
https://github.com/ollama/ollama/issues/5087
| 2,355,977,854
|
I_kwDOJ0Z1Ps6MbV5-
| 5,087
|
Qwen2 "GGGG" issue is back in version 0.1.44
|
{
"login": "Speedway1",
"id": 100301611,
"node_id": "U_kgDOBfp7Kw",
"avatar_url": "https://avatars.githubusercontent.com/u/100301611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Speedway1",
"html_url": "https://github.com/Speedway1",
"followers_url": "https://api.github.com/users/Speedway1/followers",
"following_url": "https://api.github.com/users/Speedway1/following{/other_user}",
"gists_url": "https://api.github.com/users/Speedway1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Speedway1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Speedway1/subscriptions",
"organizations_url": "https://api.github.com/users/Speedway1/orgs",
"repos_url": "https://api.github.com/users/Speedway1/repos",
"events_url": "https://api.github.com/users/Speedway1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Speedway1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 11
| 2024-06-16T21:03:11
| 2024-08-06T12:46:33
| 2024-07-07T14:26:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Qwen2 72B outputs a series of Gs or random garbage, while Qwen2 7B, which fits on a single card, works fine. It seems that when Ollama needs to spread a model across two GPU cards, it doesn't work.
For example:
```
ollama@TH-AI2:~$ ollama run qwen2:72b
>>> Tell me a sotry about a bird and a tree who loved each other
25 and789320 and,
1202 164_39 a1.13 2019 the
,096
2,,',
314 is
```
$ ollama -v
ollama version is 0.1.44
llamacpp@TH-AI2:~$ /opt/rocm/bin/rocm-smi
========================================== ROCm System Management Interface ==========================================
==================================================== Concise Info ====================================================
Device [Model : Revision] Temp Power Partitions SCLK MCLK Fan Perf PwrCap VRAM% GPU%
Name (20 chars) (Edge) (Avg) (Mem, Compute)
======================================================================================================================
0 [0x5304 : 0xc8] 47.0°C 79.0W N/A, N/A 189Mhz 1249Mhz 20.0% auto 327.0W 86% 9%
0x744c
1 [0x5304 : 0xc8] 48.0°C 81.0W N/A, N/A 228Mhz 1249Mhz 20.0% auto 327.0W 85% 9%
0x744c
2 [0x8877 : 0xc3] 36.0°C 9.155W N/A, N/A None 1800Mhz 0% auto Unsupported 15% 0%
0x164e
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.44
|
{
"login": "Speedway1",
"id": 100301611,
"node_id": "U_kgDOBfp7Kw",
"avatar_url": "https://avatars.githubusercontent.com/u/100301611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Speedway1",
"html_url": "https://github.com/Speedway1",
"followers_url": "https://api.github.com/users/Speedway1/followers",
"following_url": "https://api.github.com/users/Speedway1/following{/other_user}",
"gists_url": "https://api.github.com/users/Speedway1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Speedway1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Speedway1/subscriptions",
"organizations_url": "https://api.github.com/users/Speedway1/orgs",
"repos_url": "https://api.github.com/users/Speedway1/repos",
"events_url": "https://api.github.com/users/Speedway1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Speedway1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5087/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7560
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7560/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7560/comments
|
https://api.github.com/repos/ollama/ollama/issues/7560/events
|
https://github.com/ollama/ollama/pull/7560
| 2,641,647,928
|
PR_kwDOJ0Z1Ps6BN4Ns
| 7,560
|
Be explicit for gpu library link dir
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-11-07T17:01:22
| 2024-11-08T23:35:14
| 2024-11-07T17:20:40
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7560",
"html_url": "https://github.com/ollama/ollama/pull/7560",
"diff_url": "https://github.com/ollama/ollama/pull/7560.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7560.patch",
"merged_at": "2024-11-07T17:20:40"
}
|
On Linux, nvcc isn't automatically linking against the same CUDA version.
Fixes #7546
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7560/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5311
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5311/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5311/comments
|
https://api.github.com/repos/ollama/ollama/issues/5311/events
|
https://github.com/ollama/ollama/pull/5311
| 2,376,313,867
|
PR_kwDOJ0Z1Ps5zr_sI
| 5,311
|
Update OpenAI Compatibility Docs with /v1/completions
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-26T21:31:03
| 2024-08-02T20:16:25
| 2024-08-02T20:16:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5311",
"html_url": "https://github.com/ollama/ollama/pull/5311",
"diff_url": "https://github.com/ollama/ollama/pull/5311.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5311.patch",
"merged_at": "2024-08-02T20:16:23"
}
|
Referencing #5209
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5311/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4444
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4444/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4444/comments
|
https://api.github.com/repos/ollama/ollama/issues/4444/events
|
https://github.com/ollama/ollama/issues/4444
| 2,296,788,944
|
I_kwDOJ0Z1Ps6I5jfQ
| 4,444
|
Add tab completions for fish shell
|
{
"login": "coder543",
"id": 726063,
"node_id": "MDQ6VXNlcjcyNjA2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/726063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coder543",
"html_url": "https://github.com/coder543",
"followers_url": "https://api.github.com/users/coder543/followers",
"following_url": "https://api.github.com/users/coder543/following{/other_user}",
"gists_url": "https://api.github.com/users/coder543/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coder543/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coder543/subscriptions",
"organizations_url": "https://api.github.com/users/coder543/orgs",
"repos_url": "https://api.github.com/users/coder543/repos",
"events_url": "https://api.github.com/users/coder543/events{/privacy}",
"received_events_url": "https://api.github.com/users/coder543/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-05-15T03:31:52
| 2024-05-15T03:31:52
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This is a little something I worked up (with some help :robot:) to make my life easier as a `fish` user:
`~/.config/fish/completions/ollama.fish`
```fish
function __ollama_list
set -l query (string join ' ' $argv)
ollama list $query | awk 'NR > 1 { gsub(/:latest$/, "", $1); print $1 }'
end
# Complete subcommands for ollama with descriptions
complete -c ollama -n '__fish_use_subcommand' -f -a "serve" -d "Start ollama"
complete -c ollama -n '__fish_use_subcommand' -f -a "create" -d "Create a model from a Modelfile"
complete -c ollama -n '__fish_use_subcommand' -f -a "show" -d "Show information for a model"
complete -c ollama -n '__fish_use_subcommand' -f -a "run" -d "Run a model"
complete -c ollama -n '__fish_use_subcommand' -f -a "pull" -d "Pull a model from a registry"
complete -c ollama -n '__fish_use_subcommand' -f -a "push" -d "Push a model to a registry"
complete -c ollama -n '__fish_use_subcommand' -f -a "list ls" -d "List models"
complete -c ollama -n '__fish_use_subcommand' -f -a "ps" -d "List running models"
complete -c ollama -n '__fish_use_subcommand' -f -a "cp" -d "Copy a model"
complete -c ollama -n '__fish_use_subcommand' -f -a "rm" -d "Remove a model"
complete -c ollama -n '__fish_use_subcommand' -f -a "help" -d "Help about any command"
# Add --help flag for all subcommands
for subcmd in serve create show run pull push list ls ps cp rm help
complete -c ollama -n "__fish_seen_subcommand_from $subcmd" -l help -s h -d "Help for $subcmd"
end
# Complete options for ollama create command
complete -c ollama -n '__fish_seen_subcommand_from create' -l file -s f -d 'Name of the Modelfile (default "Modelfile")'
complete -c ollama -n '__fish_seen_subcommand_from create' -l quantize -s q -d 'Quantize model to this level (e.g. q4_0)'
# Complete options for ollama show command
complete -c ollama -n '__fish_seen_subcommand_from show' -l license -d 'Show license of a model'
complete -c ollama -n '__fish_seen_subcommand_from show' -l modelfile -d 'Show Modelfile of a model'
complete -c ollama -n '__fish_seen_subcommand_from show' -l parameters -d 'Show parameters of a model'
complete -c ollama -n '__fish_seen_subcommand_from show' -l system -d 'Show system message of a model'
complete -c ollama -n '__fish_seen_subcommand_from show' -l template -d 'Show template of a model'
# Complete options for ollama pull command
complete -c ollama -n '__fish_seen_subcommand_from pull' -l insecure -d 'Use an insecure registry'
# Complete options for ollama push command
complete -c ollama -n '__fish_seen_subcommand_from push' -l insecure -d 'Use an insecure registry'
# Complete options for ollama list command
complete -c ollama -n '__fish_seen_subcommand_from list ls' -l help -s h -d 'Help for list'
# Complete options for ollama run command
complete -c ollama -n '__fish_seen_subcommand_from run' -l format -d 'Response format (e.g. json)'
complete -c ollama -n '__fish_seen_subcommand_from run' -l insecure -d 'Use an insecure registry'
complete -c ollama -n '__fish_seen_subcommand_from run' -l keepalive -d 'Duration to keep a model loaded (e.g. 5m)'
complete -c ollama -n '__fish_seen_subcommand_from run' -l nowordwrap -d "Don't wrap words to the next line automatically"
complete -c ollama -n '__fish_seen_subcommand_from run' -l verbose -d 'Show timings for response'
# Complete the model names for ollama show, push, rm, run, and cp commands
complete -c ollama -n '__fish_seen_subcommand_from run' -f -a '(__ollama_list (commandline -ct))'
complete -c ollama -n '__fish_seen_subcommand_from show' -f -a '(__ollama_list (commandline -ct))'
complete -c ollama -n '__fish_seen_subcommand_from push' -f -a '(__ollama_list (commandline -ct))'
complete -c ollama -n '__fish_seen_subcommand_from rm' -f -a '(__ollama_list (commandline -ct))'
complete -c ollama -n '__fish_seen_subcommand_from cp' -f -a '(__ollama_list (commandline -ct))'
```
Now, I can press tab while typing `ollama` commands, and I will get helpful suggestions. It uses `ollama list` to provide completions for model names, which is especially helpful. It also provides the more natural suggestion of just the `model-name` instead of `model-name:latest` when dealing with `:latest` tags.
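The `:latest`-stripping trick that `__ollama_list` relies on can be checked in isolation by piping canned `ollama list` output through the same awk program (the model names below are made up for illustration):

```shell
# Feed fake `ollama list` output (header + two rows) through the awk filter
# used by __ollama_list: skip the header row, strip a trailing ":latest" tag.
printf 'NAME\tID\tSIZE\nllama3:latest\tabc123\t4.7GB\nphi3:mini\tdef456\t2.2GB\n' \
  | awk 'NR > 1 { gsub(/:latest$/, "", $1); print $1 }'
# → llama3
#   phi3:mini
```

Note that tags other than `:latest` (like `phi3:mini`) are left intact, which matches how you would address those models on the command line anyway.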
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4444/reactions",
"total_count": 6,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4444/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8653
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8653/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8653/comments
|
https://api.github.com/repos/ollama/ollama/issues/8653/events
|
https://github.com/ollama/ollama/issues/8653
| 2,817,878,044
|
I_kwDOJ0Z1Ps6n9Wgc
| 8,653
|
Latest pre-built Ollama binaries (cuda 12.x) do not come with "oob" support for 5.x architecture
|
{
"login": "RKouchoo",
"id": 19159026,
"node_id": "MDQ6VXNlcjE5MTU5MDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/19159026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RKouchoo",
"html_url": "https://github.com/RKouchoo",
"followers_url": "https://api.github.com/users/RKouchoo/followers",
"following_url": "https://api.github.com/users/RKouchoo/following{/other_user}",
"gists_url": "https://api.github.com/users/RKouchoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RKouchoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RKouchoo/subscriptions",
"organizations_url": "https://api.github.com/users/RKouchoo/orgs",
"repos_url": "https://api.github.com/users/RKouchoo/repos",
"events_url": "https://api.github.com/users/RKouchoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/RKouchoo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2025-01-29T11:00:37
| 2025-01-29T23:55:30
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### "oob" support for 5.x architecture is missing on prebuilt binaries
Hello,
I ended up needing some more power, so I threw a spare Quadro M5000 into my AI rig only to find it was not being utilised at all. I did the usual checks, and the card has compute capability 5.2 (confirmed compatible in the support matrix [here](https://github.com/ollama/ollama/blob/main/docs/gpu.md)).
As initial troubleshooting steps (bouncing ideas from other issues posted here) I tried:
- Manually passing through the UUIDs of all GPUs via the `CUDA_VISIBLE_DEVICES` environment variable. Ollama would acknowledge this in the logs but would never use the card anyway. There was no log message complaining about compute capability or any mention of dropping the card.
- Setting the `OLLAMA_SCHED_SPREAD` environment variable to `true`
I found that the ollama install script also grabbed **cuda 11.x by default**, but at installation time the GPUs I had installed were a pair of 20GB RTX4000 "Ada" generation cards plus an Aspeed AST2500 IPMI/VGA. I also had the 565 driver installed before setting everything else up, and it states that it is built with cuda 12.6, so I don't quite understand why the installer would grab the 11.x toolkit.
During a dive through the repo to see what I could find, I noticed that the make config file for cuda_v12 ([here](https://github.com/ollama/ollama/blob/main/make/Makefile.cuda_v12)) does not include 5.0/5.2 support by default, but it could. I also found in the current release notes for cuda 12.8 that the Maxwell, Pascal, and Volta architectures will be "frozen" (deprecated?) in future releases of the cuda toolkit [here](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#deprecated-architectures).
I managed to build everything fine with the cuda 12.8 toolkit and confirmed ollama works as I expected, here are the build flags I used:
`make cuda_v12 CUDA_ARCHITECTURES="50;52;60;61;62;70;72;75;80;86;87;89;90;90a" -j 32`
I know there are still heaps of Maxwell (5.2) cards floating around in systems, and people on a budget will definitely try to use them given the hype around the recent model releases, since those cards are capable of running the models locally to an extent. I believe either the docs need an update or the binaries should be compiled with support built in until there's an official notice or documentation change, to avoid confusion.
Apologies if I am wrong; I thought I would post this here before opening a pull request in case there was anything already in motion related to this.
Cheers,
RK
### OS
Linux - Ubuntu 22.04
### GPU
Nvidia - RTX A4000 "Ada" x2, RTX 4070 & Quadro M5000
### CPU
2x AMD EPYC 7371
### Ollama version
0.5.7 (latest install script)
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8653/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6668
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6668/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6668/comments
|
https://api.github.com/repos/ollama/ollama/issues/6668/events
|
https://github.com/ollama/ollama/issues/6668
| 2,509,520,109
|
I_kwDOJ0Z1Ps6VlDzt
| 6,668
|
Every installed model disappeared
|
{
"login": "yilmaz08",
"id": 84680978,
"node_id": "MDQ6VXNlcjg0NjgwOTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/84680978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yilmaz08",
"html_url": "https://github.com/yilmaz08",
"followers_url": "https://api.github.com/users/yilmaz08/followers",
"following_url": "https://api.github.com/users/yilmaz08/following{/other_user}",
"gists_url": "https://api.github.com/users/yilmaz08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yilmaz08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yilmaz08/subscriptions",
"organizations_url": "https://api.github.com/users/yilmaz08/orgs",
"repos_url": "https://api.github.com/users/yilmaz08/repos",
"events_url": "https://api.github.com/users/yilmaz08/events{/privacy}",
"received_events_url": "https://api.github.com/users/yilmaz08/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-09-06T05:00:45
| 2024-09-10T20:15:37
| 2024-09-07T07:10:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After booting my PC today, I realized that I was not able to use any Ollama models. The Ollama daemon is running, but `ollama ls` doesn't show anything. I tried reinstalling llama3.1:8b and it works.
Somehow every installed model disappeared and I need to reinstall all of them. (It is not a huge problem for me, but I wanted to report it.)
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.9
|
{
"login": "yilmaz08",
"id": 84680978,
"node_id": "MDQ6VXNlcjg0NjgwOTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/84680978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yilmaz08",
"html_url": "https://github.com/yilmaz08",
"followers_url": "https://api.github.com/users/yilmaz08/followers",
"following_url": "https://api.github.com/users/yilmaz08/following{/other_user}",
"gists_url": "https://api.github.com/users/yilmaz08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yilmaz08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yilmaz08/subscriptions",
"organizations_url": "https://api.github.com/users/yilmaz08/orgs",
"repos_url": "https://api.github.com/users/yilmaz08/repos",
"events_url": "https://api.github.com/users/yilmaz08/events{/privacy}",
"received_events_url": "https://api.github.com/users/yilmaz08/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6668/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8018
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8018/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8018/comments
|
https://api.github.com/repos/ollama/ollama/issues/8018/events
|
https://github.com/ollama/ollama/pull/8018
| 2,728,401,283
|
PR_kwDOJ0Z1Ps6EnMnB
| 8,018
|
api: change /delete endpoint to use POST method
|
{
"login": "nguu0123",
"id": 80659317,
"node_id": "MDQ6VXNlcjgwNjU5MzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/80659317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nguu0123",
"html_url": "https://github.com/nguu0123",
"followers_url": "https://api.github.com/users/nguu0123/followers",
"following_url": "https://api.github.com/users/nguu0123/following{/other_user}",
"gists_url": "https://api.github.com/users/nguu0123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nguu0123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nguu0123/subscriptions",
"organizations_url": "https://api.github.com/users/nguu0123/orgs",
"repos_url": "https://api.github.com/users/nguu0123/repos",
"events_url": "https://api.github.com/users/nguu0123/events{/privacy}",
"received_events_url": "https://api.github.com/users/nguu0123/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-12-09T22:16:54
| 2024-12-12T19:33:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8018",
"html_url": "https://github.com/ollama/ollama/pull/8018",
"diff_url": "https://github.com/ollama/ollama/pull/8018.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8018.patch",
"merged_at": null
}
|
PR for #7985
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8018/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2287
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2287/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2287/comments
|
https://api.github.com/repos/ollama/ollama/issues/2287/events
|
https://github.com/ollama/ollama/issues/2287
| 2,109,983,610
|
I_kwDOJ0Z1Ps59w8t6
| 2,287
|
List of embedding models supported by Ollama
|
{
"login": "bm777",
"id": 29865600,
"node_id": "MDQ6VXNlcjI5ODY1NjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/29865600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bm777",
"html_url": "https://github.com/bm777",
"followers_url": "https://api.github.com/users/bm777/followers",
"following_url": "https://api.github.com/users/bm777/following{/other_user}",
"gists_url": "https://api.github.com/users/bm777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bm777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bm777/subscriptions",
"organizations_url": "https://api.github.com/users/bm777/orgs",
"repos_url": "https://api.github.com/users/bm777/repos",
"events_url": "https://api.github.com/users/bm777/events{/privacy}",
"received_events_url": "https://api.github.com/users/bm777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-01-31T12:24:55
| 2024-02-20T04:06:51
| 2024-02-20T04:06:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
What can we do to get the list of models that Ollama supports for embeddings?
For example, if I want to serve a BERT model from the SBERT Hugging Face repo, how can I do it?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2287/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2287/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4306
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4306/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4306/comments
|
https://api.github.com/repos/ollama/ollama/issues/4306/events
|
https://github.com/ollama/ollama/pull/4306
| 2,288,688,636
|
PR_kwDOJ0Z1Ps5vCyvV
| 4,306
|
fix(routes): skip bad manifests
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-10T00:45:27
| 2024-05-10T15:58:16
| 2024-05-10T15:58:16
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4306",
"html_url": "https://github.com/ollama/ollama/pull/4306",
"diff_url": "https://github.com/ollama/ollama/pull/4306.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4306.patch",
"merged_at": "2024-05-10T15:58:16"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4306/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1367
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1367/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1367/comments
|
https://api.github.com/repos/ollama/ollama/issues/1367/events
|
https://github.com/ollama/ollama/issues/1367
| 2,022,638,274
|
I_kwDOJ0Z1Ps54jwLC
| 1,367
|
Starling-lm default prompt template is incorrect
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-12-03T17:40:02
| 2024-03-12T21:29:41
| 2024-03-12T21:29:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I tried the experiment with
`repeat this word forever "poem poem poem poem"`
which has been known to cause ChatGPT to spit out its training data.
On Alfred it said "poem poem poem poem <end_reponse" (no closing angle bracket).
on DeepSeek-Coder it said
```python
while True:
print("poem poem poem poem")
```
which I thought was a good answer.
However, on starling-lm it started writing out "poem" over and over and eventually started spitting out training data.
Looking at https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha/discussions/18,
they suggest to a user with a similar problem that the cause is the prompt:
"Starling is finetuned from openchat 3.5, which has a very special chat prompt, which goes as: "GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:"
Wondering if Ollama is making the same mistake?
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1367/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6088
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6088/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6088/comments
|
https://api.github.com/repos/ollama/ollama/issues/6088/events
|
https://github.com/ollama/ollama/issues/6088
| 2,439,026,838
|
I_kwDOJ0Z1Ps6RYJiW
| 6,088
|
Ollama fails to run the sqlcoder-34b-alpha model downloaded from Huggingface: error loading model: vocab size mismatch
|
{
"login": "Crazyisme",
"id": 15233702,
"node_id": "MDQ6VXNlcjE1MjMzNzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/15233702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Crazyisme",
"html_url": "https://github.com/Crazyisme",
"followers_url": "https://api.github.com/users/Crazyisme/followers",
"following_url": "https://api.github.com/users/Crazyisme/following{/other_user}",
"gists_url": "https://api.github.com/users/Crazyisme/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Crazyisme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Crazyisme/subscriptions",
"organizations_url": "https://api.github.com/users/Crazyisme/orgs",
"repos_url": "https://api.github.com/users/Crazyisme/repos",
"events_url": "https://api.github.com/users/Crazyisme/events{/privacy}",
"received_events_url": "https://api.github.com/users/Crazyisme/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-07-31T03:21:50
| 2024-07-31T03:21:50
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Command run: ollama run sqlcoder-34b-Q4_K_M
Error output:
llama_model_load: error loading model: vocab size mismatch
llama_load_model_from_file: exception loading model
terminate called after throwing an instance of 'std::runtime_error'
what(): vocab size mismatch
Prior steps: the following commands were run with llama.cpp:
1. python ./convert_hf_to_gguf.py /home/user/datadisk-largemodel/sqlcoder-34b-alpha/ --outfile /home/user/datadisk-largemodel/sqlcoder-34b-alpha/sqlcoder-34b.gguf
2. ./llama-quantize /home/user/datadisk-largemodel/sqlcoder-34b-alpha/sqlcoder-34b.gguf /home/user/datadisk-largemodel/sqlcoder-34b-alpha/sqlcoder-34b-Q4_K_M.gguf Q4_K_M
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.3.0
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6088/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3363
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3363/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3363/comments
|
https://api.github.com/repos/ollama/ollama/issues/3363/events
|
https://github.com/ollama/ollama/pull/3363
| 2,209,388,838
|
PR_kwDOJ0Z1Ps5q2l7k
| 3,363
|
Detect arrow keys on windows
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-26T21:45:04
| 2024-03-26T22:21:57
| 2024-03-26T22:21:56
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3363",
"html_url": "https://github.com/ollama/ollama/pull/3363",
"diff_url": "https://github.com/ollama/ollama/pull/3363.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3363.patch",
"merged_at": "2024-03-26T22:21:56"
}
|
Also simplifies the code to use the `golang.org/x/sys/windows` package. Note: this could be simplified further using the `x/term` package on the unix side of things as well, but I kept this small to fix windows first.
Fixes https://github.com/ollama/ollama/issues/2639
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3363/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5340
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5340/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5340/comments
|
https://api.github.com/repos/ollama/ollama/issues/5340/events
|
https://github.com/ollama/ollama/pull/5340
| 2,378,934,236
|
PR_kwDOJ0Z1Ps5z0FtB
| 5,340
|
gemma2 graph
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-27T19:23:22
| 2024-06-27T21:26:50
| 2024-06-27T21:26:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5340",
"html_url": "https://github.com/ollama/ollama/pull/5340",
"diff_url": "https://github.com/ollama/ollama/pull/5340.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5340.patch",
"merged_at": "2024-06-27T21:26:49"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5340/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4207
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4207/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4207/comments
|
https://api.github.com/repos/ollama/ollama/issues/4207/events
|
https://github.com/ollama/ollama/issues/4207
| 2,281,579,029
|
I_kwDOJ0Z1Ps6H_iIV
| 4,207
|
mxbai-embed-large embedding not consistent with original paper
|
{
"login": "deadbeef84",
"id": 961178,
"node_id": "MDQ6VXNlcjk2MTE3OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/961178?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deadbeef84",
"html_url": "https://github.com/deadbeef84",
"followers_url": "https://api.github.com/users/deadbeef84/followers",
"following_url": "https://api.github.com/users/deadbeef84/following{/other_user}",
"gists_url": "https://api.github.com/users/deadbeef84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deadbeef84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deadbeef84/subscriptions",
"organizations_url": "https://api.github.com/users/deadbeef84/orgs",
"repos_url": "https://api.github.com/users/deadbeef84/repos",
"events_url": "https://api.github.com/users/deadbeef84/events{/privacy}",
"received_events_url": "https://api.github.com/users/deadbeef84/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 15
| 2024-05-06T19:21:37
| 2024-07-24T07:44:43
| 2024-06-09T01:47:11
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm trying to use embeddings from `mxbai-embed-large` to create a similarity/semantic search functionality, but the quality of the embeddings coming from ollama doesn't seem to be very good.
I've tried replicating the numbers from [the original blog post](https://www.mixedbread.ai/blog/mxbai-embed-large-v1):
```js
import { Ollama } from 'ollama'
import cosineSimilarity from 'compute-cosine-similarity'
const ollama = new Ollama({ host: 'http://127.0.0.1:11434' })
const docs = [
'Represent this sentence for searching relevant passages: A man is eating a piece of bread',
'A man is eating food.',
'A man is eating pasta.',
'The girl is carrying a baby.',
'A man is riding a horse.',
]
const [queryEmbedding, ...embeddings] = await Promise.all(
docs.map(
async (doc) => (await ollama.embeddings({ model: 'mxbai-embed-large', prompt: doc })).embedding
)
)
const similarities = embeddings.map((e) => cosineSimilarity(queryEmbedding, e))
console.log(similarities)
```
```js
[
0.6231103528590645,
0.6258446589848462,
0.5631986516911313,
0.5891047395895846
]
```
Those numbers are nowhere close to the original numbers, and if I compare the embedding vectors they are completely different.
The [javascript implementation at huggingface](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) produces the same numbers as the original post.
### OS
Linux, Docker
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.33
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4207/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4207/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5928
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5928/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5928/comments
|
https://api.github.com/repos/ollama/ollama/issues/5928/events
|
https://github.com/ollama/ollama/pull/5928
| 2,428,504,473
|
PR_kwDOJ0Z1Ps52ZAS2
| 5,928
|
llm: update metal/cuda rope
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-24T21:27:30
| 2024-07-24T22:25:04
| 2024-07-24T22:25:02
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5928",
"html_url": "https://github.com/ollama/ollama/pull/5928",
"diff_url": "https://github.com/ollama/ollama/pull/5928.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5928.patch",
"merged_at": null
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5928/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/519
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/519/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/519/comments
|
https://api.github.com/repos/ollama/ollama/issues/519/events
|
https://github.com/ollama/ollama/pull/519
| 1,893,136,644
|
PR_kwDOJ0Z1Ps5aKbjz
| 519
|
Mxyng/decode
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-12T19:35:57
| 2023-09-13T19:43:58
| 2023-09-13T19:43:58
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/519",
"html_url": "https://github.com/ollama/ollama/pull/519",
"diff_url": "https://github.com/ollama/ollama/pull/519.diff",
"patch_url": "https://github.com/ollama/ollama/pull/519.patch",
"merged_at": "2023-09-13T19:43:57"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/519/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3808
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3808/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3808/comments
|
https://api.github.com/repos/ollama/ollama/issues/3808/events
|
https://github.com/ollama/ollama/issues/3808
| 2,255,436,083
|
I_kwDOJ0Z1Ps6Gbzkz
| 3,808
|
Pull multiple chunks in parallel
|
{
"login": "frankhart2018",
"id": 38374913,
"node_id": "MDQ6VXNlcjM4Mzc0OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/38374913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankhart2018",
"html_url": "https://github.com/frankhart2018",
"followers_url": "https://api.github.com/users/frankhart2018/followers",
"following_url": "https://api.github.com/users/frankhart2018/following{/other_user}",
"gists_url": "https://api.github.com/users/frankhart2018/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankhart2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankhart2018/subscriptions",
"organizations_url": "https://api.github.com/users/frankhart2018/orgs",
"repos_url": "https://api.github.com/users/frankhart2018/repos",
"events_url": "https://api.github.com/users/frankhart2018/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankhart2018/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-04-22T02:12:07
| 2024-04-22T23:43:05
| 2024-04-22T18:39:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am not sure if this has been proposed earlier or not, but having the capability of pulling models using multiple parallel processes would be very useful, especially for larger models, which take quite a lot of time (at least with my network bandwidth) to download. If this is accepted, I'd love to work on this feature :)
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3808/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6643
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6643/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6643/comments
|
https://api.github.com/repos/ollama/ollama/issues/6643/events
|
https://github.com/ollama/ollama/pull/6643
| 2,506,468,808
|
PR_kwDOJ0Z1Ps56dLdr
| 6,643
|
Minor Go Server Fixes
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-09-04T23:23:25
| 2024-09-04T23:51:07
| 2024-09-04T23:50:39
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6643",
"html_url": "https://github.com/ollama/ollama/pull/6643",
"diff_url": "https://github.com/ollama/ollama/pull/6643.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6643.patch",
"merged_at": "2024-09-04T23:50:38"
}
|
A few fixes to avoid surprises as we get wider testing
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6643/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5951
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5951/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5951/comments
|
https://api.github.com/repos/ollama/ollama/issues/5951/events
|
https://github.com/ollama/ollama/issues/5951
| 2,430,000,099
|
I_kwDOJ0Z1Ps6Q1tvj
| 5,951
|
chromadb not working adding collection
|
{
"login": "dominicdev",
"id": 3959917,
"node_id": "MDQ6VXNlcjM5NTk5MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3959917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dominicdev",
"html_url": "https://github.com/dominicdev",
"followers_url": "https://api.github.com/users/dominicdev/followers",
"following_url": "https://api.github.com/users/dominicdev/following{/other_user}",
"gists_url": "https://api.github.com/users/dominicdev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dominicdev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dominicdev/subscriptions",
"organizations_url": "https://api.github.com/users/dominicdev/orgs",
"repos_url": "https://api.github.com/users/dominicdev/repos",
"events_url": "https://api.github.com/users/dominicdev/events{/privacy}",
"received_events_url": "https://api.github.com/users/dominicdev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-07-25T13:37:41
| 2024-08-05T06:03:12
| 2024-08-05T06:03:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm trying to run the sample from the [Generate embeddings](https://ollama.com/blog/embedding-models) blog post, but adding a collection in chromadb does not seem to work.
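For reference, the blog's flow calls Ollama's `/api/embeddings` endpoint once per document and stores the returned vectors in a chromadb collection. A minimal standard-library sketch of the request side (the model name and local server URL are the blog's defaults and are assumptions here; adjust as needed):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default local server; adjust if needed


def build_embed_payload(model: str, prompt: str) -> bytes:
    """Serialize the JSON body expected by /api/embeddings."""
    return json.dumps({"model": model, "prompt": prompt}).encode()


def embed(model: str, prompt: str) -> list:
    """POST to /api/embeddings and return the embedding vector."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/embeddings",
        data=build_embed_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]


# Inspect the payload without needing a running server:
print(build_embed_payload("mxbai-embed-large", "Llamas are members of the camelid family"))
```

The vector returned by `embed` is what gets passed to chromadb's `collection.add(embeddings=[...])` in the blog example.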
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.1.48
|
{
"login": "dominicdev",
"id": 3959917,
"node_id": "MDQ6VXNlcjM5NTk5MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3959917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dominicdev",
"html_url": "https://github.com/dominicdev",
"followers_url": "https://api.github.com/users/dominicdev/followers",
"following_url": "https://api.github.com/users/dominicdev/following{/other_user}",
"gists_url": "https://api.github.com/users/dominicdev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dominicdev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dominicdev/subscriptions",
"organizations_url": "https://api.github.com/users/dominicdev/orgs",
"repos_url": "https://api.github.com/users/dominicdev/repos",
"events_url": "https://api.github.com/users/dominicdev/events{/privacy}",
"received_events_url": "https://api.github.com/users/dominicdev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5951/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5341
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5341/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5341/comments
|
https://api.github.com/repos/ollama/ollama/issues/5341/events
|
https://github.com/ollama/ollama/issues/5341
| 2,378,971,694
|
I_kwDOJ0Z1Ps6NzDou
| 5,341
|
Gemma 2 9B and 27B is not behaving right
|
{
"login": "jayakumark",
"id": 539851,
"node_id": "MDQ6VXNlcjUzOTg1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/539851?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jayakumark",
"html_url": "https://github.com/jayakumark",
"followers_url": "https://api.github.com/users/jayakumark/followers",
"following_url": "https://api.github.com/users/jayakumark/following{/other_user}",
"gists_url": "https://api.github.com/users/jayakumark/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jayakumark/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jayakumark/subscriptions",
"organizations_url": "https://api.github.com/users/jayakumark/orgs",
"repos_url": "https://api.github.com/users/jayakumark/repos",
"events_url": "https://api.github.com/users/jayakumark/events{/privacy}",
"received_events_url": "https://api.github.com/users/jayakumark/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 20
| 2024-06-27T19:46:50
| 2024-09-12T21:24:31
| 2024-09-12T21:24:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Try this with Gemma 2 9B or 27B in Ollama; it just never stops generating.
Give a succinct summary of the entire email conversation in not more than 40 words,
Emails To Andrew Fastow:
An 11 million dollar financial deal:
| william.giuliani@enron.com | andrew.fastow@enron.com | 2001-06-07 07:48:00 | Gentlemen: Attached is the DASH for the approval of the DPR Accelerated Put transaction. This partial divestiture allows us to put $11 million of our equity interest back to DPR Holding Company, LLC and its subsidiary, Dakota, LLC. Both entities are controlled by Chris Cline. In addition to redeeming part of our equity interest, the deal provides us 900,000 tons of coal priced below market, an option which could lead to a very profitable synfuel project, and the potential for more marketing fees from other Cline entities. The DASH has been approved and signed by RAC and JEDI II, and is now awaiting Mark Haedicke s review and approval. I wanted to give you the opportunity to review the DASH and become familiar with the provisions of the deal. If you have any questions on the transaction, feel free to contact me at (412) 490-9048. Others familiar with the deal are Mike Beyer, George McClellan, and Wayne Gresham. Thank you. Bill Giuliani
—-
Enron to form one corporate equity investment unit, Enron Principal Investments:
| steven.kean@enron.com | andrew.fastow@enron.com | 2001-06-12 02:09:00 | As we discussed… The other memo will follow shortly from Maureen McVicker (my assistant).———————- Forwarded by Steven J Kean/NA/Enron on 06/12/2001 09:08 AM —————————From: Sherri Sera/ENRON@enronXgate on 06/11/2001 04:51 PMTo: Steven J Kean/NA/Enron@Enroncc: Subject: FW: Draft of Organizational AnnouncementSteve, Kevin Garland sent this to me hoping to get Jeff s approval to send it out from the office of the chairman. Would it make sense to incorporate it into the memo you re working on? Please advise. Thanks, SRS —–Original Message—–From: Garland, Kevin Sent: Monday, June 11, 2001 3:28 PMTo: Sera, SherriSubject: FW: Draft of Organizational AnnouncementAnnouncing the Formation of One Corporate Equity Investing UnitTo better develop and manage equity investment opportunities related to our core businesses, Enron has formed one corporate equity investment unit. This new unit, Enron Principal Investments, will combine the existing investment units of ENA, EBS and Enron Investment Partners. Additionally, the Enron Special Asset Group will also become part of Enron Principal Investments. The strategy of Enron Principal Investments will be to work with all the business units of Enron to identify, execute, and manage equity investments, which leverage Enron s unique and proprietary knowledge. These investments may be in the form of venture capital, LBO s, traditional private equity and distressed debt positions. Kevin Garland will serve as Managing Director, overseeing all activities of Enron Principal Investments. Gene Humphrey, Michael Miller, Dick Lydecker, and their groups, will join Kevin and his group to form Enron Principal Investments. This new business unit will report to an investment committee, consisting of Greg Whalley, Ken Rice and Dave Delainey. Please join me in congratulating and supporting Kevin, Gene, Michael, Dick and the other members of this group in this effort.Jeff Skilling |
—-
Fortune Magazine Really Liked Enron’s Reputation in 2000:
| mary.clark@enron.com | andrew.fastow@enron.com | 2000-10-10 05:15:00 | Wouldn t it be great to be named Most Innovative six years straight?Anything is possible at Enron. You were selected to participate in this year s Fortune Survey of Corporate Reputations. By now, you should have received a letter and a survey from Fortune. The information you provide will be used to select America s Most Admired Companies, as well as the Most Innovative Company in America for 2000 (Enron, right?). Please complete your survey and send it to me. I am collecting all the surveys and will send them together to the Fortune analysts. If you have already completed your survey and returned it to Fortune — that s okay — just let me know so I can mark your name off my list.Thanks for you assistance.
—-
Comments on S.E.C. insider trading rules:
“the new rule may actually provide for greater flexibility”
| rex.rogers@enron.com | andrew.fastow@enron.com | 2000-10-12 04:28:00 | I have been asked to make a brief presentation at next Monday=01,s Executiv=e=20Committee meeting addressing a new S.E.C. insider trading rule. Although t=he=20new rule may increase exposure to liability for insider trading, certain=20provisions of the new rule may actually provide for greater flexibility in==20the timing of your personal trades in Enron Corp. common stock. Attached i=s=20a short memo addressing our current Company procedures and policies for=20trading, the new S.E.C. rule, and some suggestions for alternatives that yo=u=20may want to consider concerning your personal trades in Enron Corp. common==20stock. If anyone wants to discuss the new rule and the trading alternative=s=20provided by the new rule before next week=01,s meeting, please don=01,t hes=itate to=20give me a call at 713-853-3069. Thank you.Attachments
—-
Public announcement of an offer:
| mark.palmer@enron.com | andrew.fastow@enron.com | 2000-10-26 09:55:00 | Attached is the final draft of the press release relating to Project True Blue. It has been approved by the deal team, outside counsel, and Investor Relations. I propose issuing the release one hour after sending the proposal letter to True Blue s board. True Blue should issue a press release acknowledging receipt of the offer as well as file the letter as part of an 8-K. True Blue s timing should be approximately one-half hour after our release.Please send any comments to me at 34738, or reply to this email.Mark Palmer
My thoughts: Wouldn’t you want to wait until the deal was actually signed by the other party first, before making any announcements?
—-
Management Meetings kept getting rescheduled in Fall 2001:
| joannie.williamson@enron.com | andrew.fastow@enron.com | 2001-09-27 10:20:32 | I apologize if there has been any confusion regarding this meeting. It was originally scheduled for October 1, then moved to October 2, then moved again to October 22. Please confirm your attendance via e-mail. An agenda will be provided prior to the meeting.Managing Director MeetingDate: Monday, October 22Time: 8:30 – Noon (Central)Location: Hyatt Regency – HoustonPlease call if you have any questions.Thanks, Joannie3-1769
| katherine.brown@enron.com | andrew.fastow@enron.com | 2000-10-23 04:50:00 | THERE WILL NOT BE AN EXECUTIVE COMMITTEE MEETING ON MONDAY, OCTOBER 30
| joannie.williamson@enron.com | andrew.fastow@enron.com | 2001-11-06 09:10:35 | —–Original Message—–From: Enron Announcements/Corp/Enron@ENRON On Behalf Of Ken Lay- Chairman of the Board@ENRONSent: Monday, November 05, 2001 10:09 AMTo: VP s and Above- Enron Management Conference List@ENRONSubject: 2001 Management ConferenceDuring this critical time, it is imperative that our management team remain focused on our business and continue to address the challenges currently facing our company. For that reason, I have decided to postpone the Enron Management Conference.The Conference will now be held Friday, February 22 – Saturday, February 23, 2002 at the Westin La Cantera Resort in San Antonio. While the Saturday meeting allows some Enron executives who cannot be away from the office during business hours to attend the Management Conference for the first time, I also recognize that it requires many of you to forfeit additional personal time on behalf of Enron. I truly appreciate your sacrifice and I sincerely encourage your attendance.The new agenda, while still being finalized, will be abbreviated but every bit as informative and worthwhile as previously planned. We ll be in touch soon with more details.Regards,Ken Lay
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.47
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5341/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5341/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5486
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5486/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5486/comments
|
https://api.github.com/repos/ollama/ollama/issues/5486/events
|
https://github.com/ollama/ollama/issues/5486
| 2,391,137,067
|
I_kwDOJ0Z1Ps6Ohdsr
| 5,486
|
Upper token limit scales with number of parallel requests
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-07-04T15:50:37
| 2024-07-04T15:50:39
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The upper token limit should be based on a single parallel request's context size, not scale with the number of parallel requests.
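To illustrate the arithmetic at issue (a sketch, not Ollama's actual allocation code): with `OLLAMA_NUM_PARALLEL` slots the server allocates `num_ctx` tokens of context per slot, so the total grows with the slot count, while this issue argues a single request's cap should stay at `num_ctx`:

```python
def total_kv_tokens(num_ctx: int, num_parallel: int) -> int:
    """Total context tokens allocated by the server: num_ctx per parallel slot
    (assumption about the allocation scheme, based on this issue)."""
    return num_ctx * num_parallel


def desired_request_limit(num_ctx: int) -> int:
    """What this issue argues the per-request token cap should be."""
    return num_ctx


print(total_kv_tokens(2048, 4))       # 8192
print(desired_request_limit(2048))    # 2048
```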
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5486/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7655
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7655/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7655/comments
|
https://api.github.com/repos/ollama/ollama/issues/7655/events
|
https://github.com/ollama/ollama/pull/7655
| 2,656,471,844
|
PR_kwDOJ0Z1Ps6B03Rr
| 7,655
|
chore(deps): bump golang.org/x dependencies
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-13T18:53:04
| 2024-11-14T21:58:27
| 2024-11-14T21:58:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7655",
"html_url": "https://github.com/ollama/ollama/pull/7655",
"diff_url": "https://github.com/ollama/ollama/pull/7655.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7655.patch",
"merged_at": "2024-11-14T21:58:25"
}
|
Update several core golang.org/x dependencies to their latest stable versions.
## Changes
- `golang.org/x/sync`: v0.3.0 → v0.9.0
- `golang.org/x/image`: v0.14.0 → v0.22.0
- `golang.org/x/text`: v0.15.0 → v0.20.0
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7655/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3495
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3495/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3495/comments
|
https://api.github.com/repos/ollama/ollama/issues/3495/events
|
https://github.com/ollama/ollama/issues/3495
| 2,226,201,109
|
I_kwDOJ0Z1Ps6EsSIV
| 3,495
|
Supporting AQLM
|
{
"login": "vaiju1981",
"id": 421715,
"node_id": "MDQ6VXNlcjQyMTcxNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/421715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vaiju1981",
"html_url": "https://github.com/vaiju1981",
"followers_url": "https://api.github.com/users/vaiju1981/followers",
"following_url": "https://api.github.com/users/vaiju1981/following{/other_user}",
"gists_url": "https://api.github.com/users/vaiju1981/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vaiju1981/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vaiju1981/subscriptions",
"organizations_url": "https://api.github.com/users/vaiju1981/orgs",
"repos_url": "https://api.github.com/users/vaiju1981/repos",
"events_url": "https://api.github.com/users/vaiju1981/events{/privacy}",
"received_events_url": "https://api.github.com/users/vaiju1981/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-04-04T18:08:01
| 2024-04-19T15:41:19
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
Support AQLM-quantized models in Ollama. These models are quantized very aggressively, yet remain close in quality to the original models.
### How should we solve this?
By adding support for https://github.com/Vahe1994/AQLM (mostly via llama.cpp).
### What is the impact of not solving this?
This would enable very large LLMs to be loaded on smaller (CPU-bound) machines.
### Anything else?
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3495/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3495/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/769
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/769/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/769/comments
|
https://api.github.com/repos/ollama/ollama/issues/769/events
|
https://github.com/ollama/ollama/issues/769
| 1,940,397,616
|
I_kwDOJ0Z1Ps5zqB4w
| 769
|
Provide script to pull model manifest and files with curl
|
{
"login": "ctsrc",
"id": 36199671,
"node_id": "MDQ6VXNlcjM2MTk5Njcx",
"avatar_url": "https://avatars.githubusercontent.com/u/36199671?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ctsrc",
"html_url": "https://github.com/ctsrc",
"followers_url": "https://api.github.com/users/ctsrc/followers",
"following_url": "https://api.github.com/users/ctsrc/following{/other_user}",
"gists_url": "https://api.github.com/users/ctsrc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ctsrc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ctsrc/subscriptions",
"organizations_url": "https://api.github.com/users/ctsrc/orgs",
"repos_url": "https://api.github.com/users/ctsrc/repos",
"events_url": "https://api.github.com/users/ctsrc/events{/privacy}",
"received_events_url": "https://api.github.com/users/ctsrc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-10-12T16:49:08
| 2023-10-25T18:21:51
| 2023-10-12T17:03:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, my computer is behind an HTTP proxy and I can't get `ollama pull` to work through it, so I would like to pull the files I need manually with curl.
First, if I try with ollama itself to pull for example codellama:34b-code from https://ollama.ai/library/codellama/tags
```zsh
ollama pull codellama:34b-code
```
which fails because of the HTTP proxy, but the error message shows where the manifest lives:
```text
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/codellama/manifests/34b-code": dial tcp: lookup registry.ollama.ai: no such host
```
But if I then try to retrieve the URL from that output with curl (which I have configured to use the HTTP proxy)
```zsh
curl https://registry.ollama.ai/v2/library/codellama/manifests/34b-code
```
I get this error:
```text
{"errors":[{"code":"MANIFEST_INVALID","message":"manifest invalid","detail":{}}]}
```
I would like a small shell script to be included with ollama that takes the name of a model and uses `curl` to pull its manifest and blobs, so that pulling through an HTTP proxy becomes possible. The script only needs to use curl and does not need any proxy-specific logic; the local curl configuration will apply. In theory it should be straightforward to write for anyone who knows the correct manifest URL and the rest of the protocol. (For extracting data from JSON responses, `jq` can be used in the script.)
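As a starting point, the manifest URL can be derived from the model reference alone; a sketch of that mapping, consistent with the URL in the error output above (the `library/` namespace and `latest` tag defaults are assumptions about the registry layout):

```python
def manifest_url(model: str, registry: str = "https://registry.ollama.ai") -> str:
    """Build the registry manifest URL for a model reference like
    'codellama:34b-code'. Namespace defaults to 'library', tag to 'latest'."""
    name, _, tag = model.partition(":")
    tag = tag or "latest"
    if "/" not in name:
        name = f"library/{name}"
    return f"{registry}/v2/{name}/manifests/{tag}"


print(manifest_url("codellama:34b-code"))
# https://registry.ollama.ai/v2/library/codellama/manifests/34b-code
```

Note that the `MANIFEST_INVALID` error above may simply be curl sending no `Accept` header: the registry appears to speak the Docker/OCI distribution protocol, so adding `-H 'Accept: application/vnd.docker.distribution.manifest.v2+json'` to the curl call may be required (an assumption based on typical registry behavior, not confirmed from the source).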
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/769/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/6777
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6777/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6777/comments
|
https://api.github.com/repos/ollama/ollama/issues/6777/events
|
https://github.com/ollama/ollama/issues/6777
| 2,522,817,414
|
I_kwDOJ0Z1Ps6WXyOG
| 6,777
|
Attribute about model's tool use capability in model_info
|
{
"login": "StarPet",
"id": 85790781,
"node_id": "MDQ6VXNlcjg1NzkwNzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/85790781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StarPet",
"html_url": "https://github.com/StarPet",
"followers_url": "https://api.github.com/users/StarPet/followers",
"following_url": "https://api.github.com/users/StarPet/following{/other_user}",
"gists_url": "https://api.github.com/users/StarPet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StarPet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StarPet/subscriptions",
"organizations_url": "https://api.github.com/users/StarPet/orgs",
"repos_url": "https://api.github.com/users/StarPet/repos",
"events_url": "https://api.github.com/users/StarPet/events{/privacy}",
"received_events_url": "https://api.github.com/users/StarPet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-09-12T16:10:35
| 2024-09-13T01:17:18
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
In the current `model_info` I am missing an attribute that tells me whether the model is capable of handling tool calls. One can check the template data for `$.Tools`, which I find rather ugly. I therefore propose adding an attribute such as:
```
general.supports_tool_calls: true
```
or similar. If you plan to support different types of tool calls, or want to plan ahead for compatibility checks, you may also want an attribute like:
```
general.tool_format: "1.0"
```
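Until such an attribute exists, the template workaround mentioned above can be sketched like this (assuming the `template` string from an `/api/show` response; the template fragment below is hypothetical):

```python
def supports_tool_calls(template: str) -> bool:
    """Heuristic workaround: a model's chat template references the .Tools
    variable only if it was set up for tool calling. Replace with a proper
    model_info attribute once one exists."""
    return ".Tools" in template


# Hypothetical template fragment for illustration:
tmpl = "{{ if .Tools }}<tools>{{ .Tools }}</tools>{{ end }}{{ .Prompt }}"
print(supports_tool_calls(tmpl))  # True
```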
br,
Peter
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6777/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6777/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/657
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/657/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/657/comments
|
https://api.github.com/repos/ollama/ollama/issues/657/events
|
https://github.com/ollama/ollama/issues/657
| 1,920,224,997
|
I_kwDOJ0Z1Ps5ydE7l
| 657
|
Chat completion endpoint
|
{
"login": "zifeo",
"id": 9053709,
"node_id": "MDQ6VXNlcjkwNTM3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9053709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zifeo",
"html_url": "https://github.com/zifeo",
"followers_url": "https://api.github.com/users/zifeo/followers",
"following_url": "https://api.github.com/users/zifeo/following{/other_user}",
"gists_url": "https://api.github.com/users/zifeo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zifeo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zifeo/subscriptions",
"organizations_url": "https://api.github.com/users/zifeo/orgs",
"repos_url": "https://api.github.com/users/zifeo/repos",
"events_url": "https://api.github.com/users/zifeo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zifeo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-09-30T11:36:11
| 2023-10-02T20:02:09
| 2023-10-02T20:02:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Most UIs are compatible with the OpenAI endpoint definitions. Would it be possible to support the same format in ollama so frontends could be easily plugged in? See https://docs.typingmind.com/other-resources/how-tos/use-custom-models-or-local-models-in-typing-mind-(vicuna-alpaca-llama-gpt4all-dolly-etc.).
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/657/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8270
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8270/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8270/comments
|
https://api.github.com/repos/ollama/ollama/issues/8270/events
|
https://github.com/ollama/ollama/issues/8270
| 2,763,733,391
|
I_kwDOJ0Z1Ps6kuzmP
| 8,270
|
Incorrect NUMA detection logic, fails for AMD Threadripper 1950X
|
{
"login": "lukedd",
"id": 2254591,
"node_id": "MDQ6VXNlcjIyNTQ1OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2254591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukedd",
"html_url": "https://github.com/lukedd",
"followers_url": "https://api.github.com/users/lukedd/followers",
"following_url": "https://api.github.com/users/lukedd/following{/other_user}",
"gists_url": "https://api.github.com/users/lukedd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukedd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukedd/subscriptions",
"organizations_url": "https://api.github.com/users/lukedd/orgs",
"repos_url": "https://api.github.com/users/lukedd/repos",
"events_url": "https://api.github.com/users/lukedd/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukedd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-12-30T21:52:33
| 2024-12-30T22:13:27
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
On my AMD Threadripper 1950X CPU with NUMA mode enabled in the BIOS, ollama does not detect that I am running on a NUMA system due to flawed logic in its detection code here: https://github.com/ollama/ollama/blob/459d822b5188dba051e21dfd15b6552543a4bbcf/discover/cpu_common.go#L10-L24
I can "trick" ollama into detecting NUMA by setting up fake information in `/sys/devices/system/cpu/cpu*/topology/physical_package_id` using `overlayfs`, which gives me a ~20% speedup for CPU-only eval-rate (tested with gemma2:27b).
The problem in the logic is that it counts how many physical CPU packages are in the system, but my system has a single CPU package containing 2 dies each with their own memory controller.
A naïve fix would be to look at `die_id` rather than `physical_package_id`: this would work for me but I fear there may exist other hardware which has multiple dies sharing a single memory controller. Also even on my system I can disable NUMA in the BIOS so that memory access appears to be uniform - under the hood this interleaves memory access across both NUMA nodes. So in this mode looking at `die_id` would give the wrong answer.
A better fix would be to look at the actual NUMA node information presented by the kernel under `/sys/devices/system/node`, e.g. on my system the file `/sys/devices/system/node/online` contains `0-1` whereas on a uniform memory system it contains `0`.
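The sysfs-based check suggested above could look roughly like the following. This is a hedged sketch, not the actual Ollama code; `numaFromOnlineList` and its parsing of the online-node list are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// numaFromOnlineList reports whether an online-node list, such as the
// contents of /sys/devices/system/node/online, describes more than one
// NUMA node. "0" means a single node; "0-1" or "0,2-3" mean several.
func numaFromOnlineList(online string) bool {
	return strings.ContainsAny(strings.TrimSpace(online), "-,")
}

func main() {
	data, err := os.ReadFile("/sys/devices/system/node/online")
	if err != nil {
		fmt.Println("sysfs unavailable; assuming non-NUMA")
		return
	}
	fmt.Println("NUMA:", numaFromOnlineList(string(data)))
}
```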
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8270/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/484
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/484/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/484/comments
|
https://api.github.com/repos/ollama/ollama/issues/484/events
|
https://github.com/ollama/ollama/issues/484
| 1,885,880,426
|
I_kwDOJ0Z1Ps5waEBq
| 484
|
`ollama run` doesn't pull model if using a remote host
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg",
"url": "https://api.github.com/repos/ollama/ollama/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
| null |
[] | null | 0
| 2023-09-07T13:17:12
| 2023-09-21T17:35:15
| 2023-09-21T17:35:15
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently when running `ollama run` against a remote instance of Ollama (e.g. `OLLAMA_HOST=192.168.1.32:11434 ollama run llama2`), it will error if the model does not exist (vs pulling it). We rely on the client checking for the file here: https://github.com/jmorganca/ollama/blob/main/cmd/cmd.go#L115. Instead we can use an api such as `/api/show` or `/api/generate` to check if the model has been pulled.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/484/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8606
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8606/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8606/comments
|
https://api.github.com/repos/ollama/ollama/issues/8606/events
|
https://github.com/ollama/ollama/issues/8606
| 2,812,491,291
|
I_kwDOJ0Z1Ps6nozYb
| 8,606
|
Why doesn't my ollama use GPU
|
{
"login": "baotianxia",
"id": 68735021,
"node_id": "MDQ6VXNlcjY4NzM1MDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/68735021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baotianxia",
"html_url": "https://github.com/baotianxia",
"followers_url": "https://api.github.com/users/baotianxia/followers",
"following_url": "https://api.github.com/users/baotianxia/following{/other_user}",
"gists_url": "https://api.github.com/users/baotianxia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baotianxia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baotianxia/subscriptions",
"organizations_url": "https://api.github.com/users/baotianxia/orgs",
"repos_url": "https://api.github.com/users/baotianxia/repos",
"events_url": "https://api.github.com/users/baotianxia/events{/privacy}",
"received_events_url": "https://api.github.com/users/baotianxia/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 21
| 2025-01-27T09:27:24
| 2025-01-28T02:37:10
| 2025-01-28T02:37:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I installed the Nvidia driver using `sudo apt install nvidia-driver-xxx`, and ollama reports that the model is loaded on the GPU, but my CPU usage is 100% and GPU usage is 0%.



Ubuntu server 24.04
|
{
"login": "baotianxia",
"id": 68735021,
"node_id": "MDQ6VXNlcjY4NzM1MDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/68735021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baotianxia",
"html_url": "https://github.com/baotianxia",
"followers_url": "https://api.github.com/users/baotianxia/followers",
"following_url": "https://api.github.com/users/baotianxia/following{/other_user}",
"gists_url": "https://api.github.com/users/baotianxia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baotianxia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baotianxia/subscriptions",
"organizations_url": "https://api.github.com/users/baotianxia/orgs",
"repos_url": "https://api.github.com/users/baotianxia/repos",
"events_url": "https://api.github.com/users/baotianxia/events{/privacy}",
"received_events_url": "https://api.github.com/users/baotianxia/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8606/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7130
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7130/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7130/comments
|
https://api.github.com/repos/ollama/ollama/issues/7130/events
|
https://github.com/ollama/ollama/issues/7130
| 2,572,818,233
|
I_kwDOJ0Z1Ps6ZWhc5
| 7,130
|
GPU VRAM Usage Timeout Warnings on Embeddings Model Load
|
{
"login": "maxruby",
"id": 5504973,
"node_id": "MDQ6VXNlcjU1MDQ5NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5504973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxruby",
"html_url": "https://github.com/maxruby",
"followers_url": "https://api.github.com/users/maxruby/followers",
"following_url": "https://api.github.com/users/maxruby/following{/other_user}",
"gists_url": "https://api.github.com/users/maxruby/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxruby/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxruby/subscriptions",
"organizations_url": "https://api.github.com/users/maxruby/orgs",
"repos_url": "https://api.github.com/users/maxruby/repos",
"events_url": "https://api.github.com/users/maxruby/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxruby/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 12
| 2024-10-08T10:46:12
| 2025-01-16T03:56:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Description:
We are experiencing repeated GPU VRAM recovery timeouts while running multiple models on the ollama platform. The GPU in use is 2x NVIDIA RTX A5000. The system logs show that the VRAM usage does not recover within the expected timeout (5+ seconds), which affects performance and stability.
The issue occurs when loading and running embedding models, particularly when switching between different models. Below is an excerpt of the log showing the repeated warnings and the affected models:
```
Okt 08 12:26:37 Aerion3 ollama[104243]: llama_model_loader: - type f32: 243 tensors
Okt 08 12:26:37 Aerion3 ollama[104243]: llama_model_loader: - type f16: 146 tensors
Okt 08 12:31:41 Aerion3 ollama[104243]: time=2024-10-08T12:31:41.710+02:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.167422277 model=/usr/share/ollama/.ollama/models/blobs/sha256-03aeef8493ea9a2b8da023e8d21ce77a97e83de66a692417579aa27b717cdaf3
Okt 08 12:31:41 Aerion3 ollama[104243]: time=2024-10-08T12:31:41.959+02:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.417004589 model=/usr/share/ollama/.ollama/models/blobs/sha256-03aeef8493ea9a2b8da023e8d21ce77a97e83de66a692417579aa27b717cdaf3
Okt 08 12:31:46 Aerion3 ollama[104243]: time=2024-10-08T12:31:46.768+02:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.057837537 model=/usr/share/ollama/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d
```
Possible Causes under consideration:
- Insufficient VRAM: The GPU may not have enough VRAM to efficiently load and unload multiple models, leading to delays in VRAM recovery. **This seems unlikely because `nvtop` never shows GPU consumption above 4% when the warning appears**
- Memory Fragmentation: Fragmented memory in the VRAM might be causing issues when trying to allocate new contiguous memory.
- GPU Overload: The workload may be too heavy for the GPU, especially if multiple models are loaded simultaneously.
- CUDA Memory Management: Inefficient management of CUDA memory offloading may be causing this issue.
System Information:
- GPU: 2x NVIDIA RTX A5000
- ollama Version: 0.3.12
- Model in Use: `jina-embeddings-v2-base-en:latest`, `mxbai-embed-large-v1` and other models
- VRAM Available: ~24 GiB x2
Steps to Reproduce:
- Load and run multiple models in parallel or sequentially.
- Monitor system logs for VRAM recovery warnings as models are switched or loaded.
Expected Behavior:
The system should manage VRAM more efficiently, releasing it within the timeout to avoid warnings and improve overall performance.
Request:
Please investigate possible improvements to VRAM memory management or provide guidance on how to better configure the system to avoid these timeouts.
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.12
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7130/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4301
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4301/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4301/comments
|
https://api.github.com/repos/ollama/ollama/issues/4301/events
|
https://github.com/ollama/ollama/pull/4301
| 2,288,540,625
|
PR_kwDOJ0Z1Ps5vCSnn
| 4,301
|
Adds Ollama Grid Search to Community integrations on README
|
{
"login": "dezoito",
"id": 6494010,
"node_id": "MDQ6VXNlcjY0OTQwMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6494010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dezoito",
"html_url": "https://github.com/dezoito",
"followers_url": "https://api.github.com/users/dezoito/followers",
"following_url": "https://api.github.com/users/dezoito/following{/other_user}",
"gists_url": "https://api.github.com/users/dezoito/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dezoito/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dezoito/subscriptions",
"organizations_url": "https://api.github.com/users/dezoito/orgs",
"repos_url": "https://api.github.com/users/dezoito/repos",
"events_url": "https://api.github.com/users/dezoito/events{/privacy}",
"received_events_url": "https://api.github.com/users/dezoito/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-05-09T22:10:05
| 2024-11-21T19:11:55
| 2024-11-21T09:02:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4301",
"html_url": "https://github.com/ollama/ollama/pull/4301",
"diff_url": "https://github.com/ollama/ollama/pull/4301.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4301.patch",
"merged_at": "2024-11-21T09:02:46"
}
|
Adds the following content to the Community Integrations section:
### Model/Prompt Evaluation and Optimization
- [Ollama Grid Search](https://github.com/dezoito/ollama-grid-search) (Multi-platform desktop application)
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4301/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5240
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5240/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5240/comments
|
https://api.github.com/repos/ollama/ollama/issues/5240/events
|
https://github.com/ollama/ollama/issues/5240
| 2,368,742,196
|
I_kwDOJ0Z1Ps6NMCM0
| 5,240
|
[LINUX] Not using VRAM
|
{
"login": "Hhk78",
"id": 84645312,
"node_id": "MDQ6VXNlcjg0NjQ1MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/84645312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hhk78",
"html_url": "https://github.com/Hhk78",
"followers_url": "https://api.github.com/users/Hhk78/followers",
"following_url": "https://api.github.com/users/Hhk78/following{/other_user}",
"gists_url": "https://api.github.com/users/Hhk78/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hhk78/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hhk78/subscriptions",
"organizations_url": "https://api.github.com/users/Hhk78/orgs",
"repos_url": "https://api.github.com/users/Hhk78/repos",
"events_url": "https://api.github.com/users/Hhk78/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hhk78/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-06-23T17:19:14
| 2024-07-05T16:57:37
| 2024-07-05T16:57:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I run the model, only 11 MB of VRAM is used while nearly 5 GB of RAM is used.
```bash
➜ ~ free -h
total used free shared buff/cache available
Mem: 31Gi 4,6Gi 20Gi 697Mi 7,2Gi 26Gi
Swap: 0B 0B 0B
```
The model I use : sunapi386/llama-3-lexi-uncensored:8b
CPU: 12th Gen Intel i7-12650H (16) @ 4.600GHz
GPU: NVIDIA GeForce RTX 3050 Mobile
RAM: 32 GB
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.1.45
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5240/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1936
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1936/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1936/comments
|
https://api.github.com/repos/ollama/ollama/issues/1936/events
|
https://github.com/ollama/ollama/pull/1936
| 2,077,760,514
|
PR_kwDOJ0Z1Ps5j3hMd
| 1,936
|
Convert the REPL to use /api/chat for interactive responses
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-11T23:15:31
| 2024-01-12T20:05:53
| 2024-01-12T20:05:52
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1936",
"html_url": "https://github.com/ollama/ollama/pull/1936",
"diff_url": "https://github.com/ollama/ollama/pull/1936.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1936.patch",
"merged_at": "2024-01-12T20:05:52"
}
|
This change switches the REPL to use `/api/chat` when running in interactive mode. It will still use `/api/generate` for non-interactive sessions. I've also attempted to DRY out the display-response code for calls to either endpoint so that word wrapping is handled properly in both.
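A minimal sketch of the two request shapes the REPL switches between (payload construction only, not the PR's code; the field names follow the documented `/api/generate` and `/api/chat` request bodies):

```python
# /api/generate takes a flat prompt; /api/chat takes a message history.
def generate_payload(model: str, prompt: str) -> dict:
    return {"model": model, "prompt": prompt}

def chat_payload(model: str, history: list[tuple[str, str]]) -> dict:
    # history: (role, content) pairs accumulated by the interactive session
    return {"model": model,
            "messages": [{"role": r, "content": c} for r, c in history]}

req = chat_payload("llama2", [("user", "hi"),
                              ("assistant", "hello"),
                              ("user", "how are you?")])
```

Carrying the full message list is what lets the interactive session keep conversational context across turns.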
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1936/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7280
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7280/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7280/comments
|
https://api.github.com/repos/ollama/ollama/issues/7280/events
|
https://github.com/ollama/ollama/issues/7280
| 2,600,912,981
|
I_kwDOJ0Z1Ps6bBshV
| 7,280
|
When server is bound to 0.0.0.0, it should allow also communication redirected by netsh to localhost (issue specific to with WSL2)
|
{
"login": "mmb78",
"id": 62362216,
"node_id": "MDQ6VXNlcjYyMzYyMjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/62362216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmb78",
"html_url": "https://github.com/mmb78",
"followers_url": "https://api.github.com/users/mmb78/followers",
"following_url": "https://api.github.com/users/mmb78/following{/other_user}",
"gists_url": "https://api.github.com/users/mmb78/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmb78/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmb78/subscriptions",
"organizations_url": "https://api.github.com/users/mmb78/orgs",
"repos_url": "https://api.github.com/users/mmb78/repos",
"events_url": "https://api.github.com/users/mmb78/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmb78/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6677675697,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgU-sQ",
"url": "https://api.github.com/repos/ollama/ollama/labels/wsl",
"name": "wsl",
"color": "7E0821",
"default": false,
"description": "Issues using WSL"
}
] |
open
| false
| null |
[] | null | 0
| 2024-10-20T22:04:00
| 2024-10-29T17:45:50
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have the ollama server running inside WSL2 on Windows 10 and want to access it from outside. WSL2 needs extra tricks for outside network traffic to reach it.
When I set a netsh rule that takes the outside traffic (allowed by the Windows firewall) and redirects it to "WSL2-IP":11434
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=64065 connectaddress=172.18.200.13 connectport=11434
it all works when the ollama config has this:
Environment="OLLAMA_HOST=0.0.0.0"
I can connect to http://"machine IP":64065 and get ollama to respond!
But the problem is that the WSL2 IP is dynamic and will change, so ideally I would use this netsh rule instead:
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=64065 connectaddress=localhost connectport=11434
and just keep ollama listening only on localhost.
But this somehow does not work and the communication is lost.
Interestingly, using localhost in netsh for the open-webui server works fine!
Maybe I am missing something, but I could not find another solution to this.
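One workaround (a hedged sketch, not a fix in Ollama itself): refresh the portproxy rule at boot so it always targets WSL2's current dynamic address. The helper names here are made up; the only assumptions are that `wsl hostname -I` prints the VM's IP and the `netsh` syntax shown above:

```python
import subprocess

def wsl_ip() -> str:
    # "wsl hostname -I" prints the WSL2 VM's current address(es); take the first.
    return subprocess.check_output(["wsl", "hostname", "-I"], text=True).split()[0]

def portproxy_cmd(listen_port: int, target_ip: str,
                  target_port: int = 11434) -> list[str]:
    # Same rule as above, but with connectaddress filled in at runtime.
    return ["netsh", "interface", "portproxy", "add", "v4tov4",
            "listenaddress=0.0.0.0", f"listenport={listen_port}",
            f"connectaddress={target_ip}", f"connectport={target_port}"]

# On the Windows host (as admin): subprocess.run(portproxy_cmd(64065, wsl_ip()))
```

Run from a scheduled task at logon, this sidesteps the dynamic-IP problem while keeping ollama bound to localhost inside WSL2.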
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7280/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4711
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4711/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4711/comments
|
https://api.github.com/repos/ollama/ollama/issues/4711/events
|
https://github.com/ollama/ollama/issues/4711
| 2,324,320,104
|
I_kwDOJ0Z1Ps6Kik9o
| 4,711
|
Adding function calling support for Agents management
|
{
"login": "flefevre",
"id": 5609620,
"node_id": "MDQ6VXNlcjU2MDk2MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5609620?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flefevre",
"html_url": "https://github.com/flefevre",
"followers_url": "https://api.github.com/users/flefevre/followers",
"following_url": "https://api.github.com/users/flefevre/following{/other_user}",
"gists_url": "https://api.github.com/users/flefevre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flefevre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flefevre/subscriptions",
"organizations_url": "https://api.github.com/users/flefevre/orgs",
"repos_url": "https://api.github.com/users/flefevre/repos",
"events_url": "https://api.github.com/users/flefevre/events{/privacy}",
"received_events_url": "https://api.github.com/users/flefevre/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-29T22:05:05
| 2024-07-26T05:34:02
| 2024-07-26T00:47:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am trying to use Ollama inside Flowise with its Agents feature.
But it seems Flowise cannot use Ollama, showing:
"Only compatible with models that are capable of function calling: ChatOpenAI, ChatMistral, ChatAnthropic, ChatGoogleGenerativeAI, GroqChat. Best result with GPT-4 model"
Is it because I was calling "llama3:8b-instruct-q8_0" through Ollama?
Or is it because Ollama does not support "function calling"?
https://www.youtube.com/watch?v=284Z8k7yJRE
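Ollama did gain function-calling support in later releases via the `tools` field on `/api/chat`. A minimal sketch of the request shape; the `get_weather` tool here is a made-up example, not a real Ollama built-in:

```python
# Build a /api/chat request that advertises one callable tool to the model.
def chat_with_tools(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

req = chat_with_tools("llama3.1", "What's the weather in Paris?")
```

A tool-capable model responds with a `tool_calls` entry in the assistant message instead of plain text, which is what agent frameworks like Flowise rely on.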
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4711/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6035
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6035/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6035/comments
|
https://api.github.com/repos/ollama/ollama/issues/6035/events
|
https://github.com/ollama/ollama/pull/6035
| 2,434,304,578
|
PR_kwDOJ0Z1Ps52r7F9
| 6,035
|
Update install.sh:Replace "command -v" with encapsulated functionality
|
{
"login": "wangqingfree",
"id": 28502216,
"node_id": "MDQ6VXNlcjI4NTAyMjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/28502216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangqingfree",
"html_url": "https://github.com/wangqingfree",
"followers_url": "https://api.github.com/users/wangqingfree/followers",
"following_url": "https://api.github.com/users/wangqingfree/following{/other_user}",
"gists_url": "https://api.github.com/users/wangqingfree/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangqingfree/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangqingfree/subscriptions",
"organizations_url": "https://api.github.com/users/wangqingfree/orgs",
"repos_url": "https://api.github.com/users/wangqingfree/repos",
"events_url": "https://api.github.com/users/wangqingfree/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangqingfree/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-29T02:12:51
| 2024-09-05T16:49:48
| 2024-09-05T16:49:48
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6035",
"html_url": "https://github.com/ollama/ollama/pull/6035",
"diff_url": "https://github.com/ollama/ollama/pull/6035.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6035.patch",
"merged_at": "2024-09-05T16:49:48"
}
|
Replace "command -v" with encapsulated functionality
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6035/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5865
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5865/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5865/comments
|
https://api.github.com/repos/ollama/ollama/issues/5865/events
|
https://github.com/ollama/ollama/issues/5865
| 2,424,137,321
|
I_kwDOJ0Z1Ps6QfWZp
| 5,865
|
Endless update loop
|
{
"login": "yuchenwei28",
"id": 141537882,
"node_id": "U_kgDOCG-yWg",
"avatar_url": "https://avatars.githubusercontent.com/u/141537882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuchenwei28",
"html_url": "https://github.com/yuchenwei28",
"followers_url": "https://api.github.com/users/yuchenwei28/followers",
"following_url": "https://api.github.com/users/yuchenwei28/following{/other_user}",
"gists_url": "https://api.github.com/users/yuchenwei28/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuchenwei28/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuchenwei28/subscriptions",
"organizations_url": "https://api.github.com/users/yuchenwei28/orgs",
"repos_url": "https://api.github.com/users/yuchenwei28/repos",
"events_url": "https://api.github.com/users/yuchenwei28/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuchenwei28/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 4
| 2024-07-23T02:58:58
| 2024-07-23T14:24:50
| 2024-07-23T14:24:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
It keeps updating to 2.0.8 in an endless loop.
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
_No response_
|
{
"login": "yuchenwei28",
"id": 141537882,
"node_id": "U_kgDOCG-yWg",
"avatar_url": "https://avatars.githubusercontent.com/u/141537882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuchenwei28",
"html_url": "https://github.com/yuchenwei28",
"followers_url": "https://api.github.com/users/yuchenwei28/followers",
"following_url": "https://api.github.com/users/yuchenwei28/following{/other_user}",
"gists_url": "https://api.github.com/users/yuchenwei28/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuchenwei28/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuchenwei28/subscriptions",
"organizations_url": "https://api.github.com/users/yuchenwei28/orgs",
"repos_url": "https://api.github.com/users/yuchenwei28/repos",
"events_url": "https://api.github.com/users/yuchenwei28/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuchenwei28/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5865/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6536
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6536/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6536/comments
|
https://api.github.com/repos/ollama/ollama/issues/6536/events
|
https://github.com/ollama/ollama/pull/6536
| 2,490,542,649
|
PR_kwDOJ0Z1Ps55omB9
| 6,536
|
Embeddings fixes
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-27T23:35:34
| 2024-08-27T23:49:15
| 2024-08-27T23:49:14
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6536",
"html_url": "https://github.com/ollama/ollama/pull/6536",
"diff_url": "https://github.com/ollama/ollama/pull/6536.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6536.patch",
"merged_at": "2024-08-27T23:49:14"
}
| null |
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6536/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1870
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1870/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1870/comments
|
https://api.github.com/repos/ollama/ollama/issues/1870/events
|
https://github.com/ollama/ollama/issues/1870
| 2,072,664,043
|
I_kwDOJ0Z1Ps57ilfr
| 1,870
|
last update broke something on my late 2012 imac
|
{
"login": "umtksa",
"id": 12473742,
"node_id": "MDQ6VXNlcjEyNDczNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/12473742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/umtksa",
"html_url": "https://github.com/umtksa",
"followers_url": "https://api.github.com/users/umtksa/followers",
"following_url": "https://api.github.com/users/umtksa/following{/other_user}",
"gists_url": "https://api.github.com/users/umtksa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/umtksa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/umtksa/subscriptions",
"organizations_url": "https://api.github.com/users/umtksa/orgs",
"repos_url": "https://api.github.com/users/umtksa/repos",
"events_url": "https://api.github.com/users/umtksa/events{/privacy}",
"received_events_url": "https://api.github.com/users/umtksa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-01-09T15:56:51
| 2024-01-10T06:58:57
| 2024-01-10T00:51:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
dyld: Symbol not found: _OBJC_CLASS_$_MTLComputePassDescriptor
Referenced from: /usr/local/bin/ollama (which was built for Mac OS X 11.3)
Expected in: /System/Library/Frameworks/Metal.framework/Versions/A/Metal
in /usr/local/bin/ollama
I was using mistral and mixtral; now I cannot even run tinyllama :/
Any suggestions?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1870/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4365
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4365/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4365/comments
|
https://api.github.com/repos/ollama/ollama/issues/4365/events
|
https://github.com/ollama/ollama/issues/4365
| 2,290,953,935
|
I_kwDOJ0Z1Ps6IjS7P
| 4,365
|
llava can't run
|
{
"login": "Elminsst",
"id": 130235860,
"node_id": "U_kgDOB8M91A",
"avatar_url": "https://avatars.githubusercontent.com/u/130235860?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Elminsst",
"html_url": "https://github.com/Elminsst",
"followers_url": "https://api.github.com/users/Elminsst/followers",
"following_url": "https://api.github.com/users/Elminsst/following{/other_user}",
"gists_url": "https://api.github.com/users/Elminsst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Elminsst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Elminsst/subscriptions",
"organizations_url": "https://api.github.com/users/Elminsst/orgs",
"repos_url": "https://api.github.com/users/Elminsst/repos",
"events_url": "https://api.github.com/users/Elminsst/events{/privacy}",
"received_events_url": "https://api.github.com/users/Elminsst/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-05-11T15:12:34
| 2024-07-17T16:17:09
| 2024-07-17T16:17:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I ran `ollama run llava`, but it didn't work.

The server.log shows:
[GIN] 2024/05/11 - 23:10:27 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/11 - 23:10:27 | 200 | 1.0406ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/11 - 23:10:27 | 200 | 513.8µs | 127.0.0.1 | POST "/api/show"
time=2024-05-11T23:10:27.560+08:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="14.9 GiB" memory.required.full="5.3 GiB" memory.required.partial="5.3 GiB" memory.required.kv="256.0 MiB" memory.weights.total="3.9 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="181.0 MiB"
time=2024-05-11T23:10:27.565+08:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="14.9 GiB" memory.required.full="5.3 GiB" memory.required.partial="5.3 GiB" memory.required.kv="256.0 MiB" memory.weights.total="3.9 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="181.0 MiB"
time=2024-05-11T23:10:27.566+08:00 level=WARN source=server.go:207 msg="multimodal models don't support parallel requests yet"
time=2024-05-11T23:10:27.576+08:00 level=INFO source=server.go:318 msg="starting llama server" cmd="C:\\Users\\Elmin\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model D:\\AI\\语言模型\\models\\Repository\\blobs\\sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --mmproj D:\\AI\\语言模型\\models\\Repository\\blobs\\sha256-72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539 --parallel 1 --port 14750"
time=2024-05-11T23:10:27.580+08:00 level=INFO source=sched.go:333 msg="loaded runners" count=1
time=2024-05-11T23:10:27.580+08:00 level=INFO source=server.go:488 msg="waiting for llama runner to start responding"
time=2024-05-11T23:10:27.581+08:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=2770 commit="952d03d" tid="38720" timestamp=1715440227
INFO [wmain] system info | n_threads=10 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="38720" timestamp=1715440227 total_threads=20
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="19" port="14750" tid="38720" timestamp=1715440227
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
ERROR [load_model] unable to load clip model | model="D:\\AI\\语言模型\\models\\Repository\\blobs\\sha256-72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539" tid="38720" timestamp=1715440227
time=2024-05-11T23:10:27.832+08:00 level=ERROR source=sched.go:339 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 "
[GIN] 2024/05/11 - 23:10:27 | 500 | 588.9624ms | 127.0.0.1 | POST "/api/chat"
time=2024-05-11T23:10:32.924+08:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0920576
time=2024-05-11T23:10:33.174+08:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.3417755
time=2024-05-11T23:10:33.424+08:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5916134
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.36
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4365/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6226
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6226/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6226/comments
|
https://api.github.com/repos/ollama/ollama/issues/6226/events
|
https://github.com/ollama/ollama/issues/6226
| 2,452,822,165
|
I_kwDOJ0Z1Ps6SMxiV
| 6,226
|
Error: unexpected EOF:
|
{
"login": "KangInKoo",
"id": 47407250,
"node_id": "MDQ6VXNlcjQ3NDA3MjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/47407250?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KangInKoo",
"html_url": "https://github.com/KangInKoo",
"followers_url": "https://api.github.com/users/KangInKoo/followers",
"following_url": "https://api.github.com/users/KangInKoo/following{/other_user}",
"gists_url": "https://api.github.com/users/KangInKoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KangInKoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KangInKoo/subscriptions",
"organizations_url": "https://api.github.com/users/KangInKoo/orgs",
"repos_url": "https://api.github.com/users/KangInKoo/repos",
"events_url": "https://api.github.com/users/KangInKoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/KangInKoo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2024-08-07T07:54:16
| 2024-09-06T00:59:11
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi, I'm studying fine-tuning.
I fine-tuned the "unsloth/gemma-2-2b-it" model.
I created the dataset myself; it contains fewer than 100 examples.
I want to use only the fine-tuned model, without merging it back into the base model.
I was able to run the fine-tuned model using the code below.
```python
from transformers import pipeline  # import added for completeness

pipe_finetuned = pipeline(
    "text-generation",
    model=finetuned_model,
    tokenizer=tokenizer,
    max_new_tokens=512
)
outputs = pipe_finetuned(
    prompt,
    do_sample=True,
    temperature=0.35,
    top_k=5,
    top_p=0.95,
    add_special_tokens=True
)
print(outputs[0]["generated_text"])
```
Finally, I plan to deploy the fine-tuned model with Ollama, so I converted it to a GGUF file using llama.cpp and then created a Modelfile.
```
FROM gemma-2-2B-it-F16.gguf
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>{{ end }}
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER <|eot_id|>
```
Then I ran `ollama create`, but an EOF error occurred:
```shell
ollama create gemma2 -f Modelfile
```
How can I fix this error?
Any help would be appreciated.

### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
ollama version is 0.1.47
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6226/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5933
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5933/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5933/comments
|
https://api.github.com/repos/ollama/ollama/issues/5933/events
|
https://github.com/ollama/ollama/pull/5933
| 2,428,587,226
|
PR_kwDOJ0Z1Ps52ZRvr
| 5,933
|
update readme to llama3.1
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-24T22:45:12
| 2024-07-28T21:21:40
| 2024-07-28T21:21:38
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5933",
"html_url": "https://github.com/ollama/ollama/pull/5933",
"diff_url": "https://github.com/ollama/ollama/pull/5933.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5933.patch",
"merged_at": "2024-07-28T21:21:38"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5933/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3641
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3641/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3641/comments
|
https://api.github.com/repos/ollama/ollama/issues/3641/events
|
https://github.com/ollama/ollama/pull/3641
| 2,242,367,650
|
PR_kwDOJ0Z1Ps5sm11e
| 3,641
|
app: gracefully shut down `ollama serve` on windows
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-14T21:08:23
| 2024-04-14T22:33:26
| 2024-04-14T22:33:25
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3641",
"html_url": "https://github.com/ollama/ollama/pull/3641",
"diff_url": "https://github.com/ollama/ollama/pull/3641.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3641.patch",
"merged_at": "2024-04-14T22:33:25"
}
|
Fixes https://github.com/ollama/ollama/issues/3623
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3641/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4009
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4009/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4009/comments
|
https://api.github.com/repos/ollama/ollama/issues/4009/events
|
https://github.com/ollama/ollama/pull/4009
| 2,267,809,442
|
PR_kwDOJ0Z1Ps5t8qlI
| 4,009
|
Fix concurrency for CPU mode
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-28T20:48:26
| 2024-04-28T21:20:31
| 2024-04-28T21:20:28
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4009",
"html_url": "https://github.com/ollama/ollama/pull/4009",
"diff_url": "https://github.com/ollama/ollama/pull/4009.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4009.patch",
"merged_at": "2024-04-28T21:20:28"
}
|
Prior refactoring passes on #3418 accidentally removed the logic to bypass VRAM checks for CPU loads. This adds that back, along with test coverage.
This also fixes loaded map access in the unit test to be behind the mutex, which was likely the cause of various flakes in the tests.
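The mutex fix described above can be sketched as follows. This is a minimal illustration of guarding shared map access with a mutex, not the actual Ollama scheduler code; all names here (`loadedRunners`, `set`, `get`) are hypothetical:

```go
package main

import (
	"fmt"
	"sync"
)

// loadedRunners guards a shared map with a mutex so that concurrent
// goroutines (e.g. scheduler workers and test assertions) never race
// on reads and writes.
type loadedRunners struct {
	mu     sync.Mutex
	loaded map[string]int
}

// set records the number of offloaded layers for a model under the lock.
func (l *loadedRunners) set(model string, layers int) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.loaded[model] = layers
}

// get reads an entry under the same lock, avoiding the unguarded
// access that caused the test flakes.
func (l *loadedRunners) get(model string) (int, bool) {
	l.mu.Lock()
	defer l.mu.Unlock()
	n, ok := l.loaded[model]
	return n, ok
}

func main() {
	r := &loadedRunners{loaded: make(map[string]int)}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			r.set(fmt.Sprintf("model-%d", i), i)
		}(i)
	}
	wg.Wait()
	n, ok := r.get("model-3")
	fmt.Println(n, ok)
}
```

Running the unguarded version of this under `go test -race` would flag the data race; with the mutex in place it is clean.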
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4009/timeline
| null | null | true
|