Dataset columns (one GitHub issue or pull-request record per row):

| Column | Type | Range / classes |
| --- | --- | --- |
| url | string | lengths 51-54 |
| repository_url | string | 1 value |
| labels_url | string | lengths 65-68 |
| comments_url | string | lengths 60-63 |
| events_url | string | lengths 58-61 |
| html_url | string | lengths 39-44 |
| id | int64 | 1.78B-2.82B |
| node_id | string | lengths 18-19 |
| number | int64 | 1-8.69k |
| title | string | lengths 1-382 |
| user | dict | |
| labels | list | lengths 0-5 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0-2 |
| milestone | null | |
| comments | int64 | 0-323 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 4 classes |
| sub_issues_summary | dict | |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 2-118k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 60-63 |
| performed_via_github_app | null | |
| state_reason | string | 4 classes |
| is_pull_request | bool | 2 classes |
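The column statistics above match a Hugging Face dataset-viewer summary for a GitHub-issues dump. A minimal sketch for loading and splitting such a dataset, assuming it is published on the Hub (the repo id `user/ollama-github-issues` below is hypothetical, standing in for the actual dataset name):

```python
from datasets import load_dataset

# Hypothetical repo id; substitute the real dataset name.
ds = load_dataset("user/ollama-github-issues", split="train")

# `is_pull_request` has exactly two classes (True/False), so it
# cleanly separates plain issues from pull requests.
issues = ds.filter(lambda row: not row["is_pull_request"])
pulls = ds.filter(lambda row: row["is_pull_request"])

print(f"{len(issues)} issues, {len(pulls)} pull requests")
print(issues[0]["title"])
```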
https://api.github.com/repos/ollama/ollama/issues/6080
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6080/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6080/comments
https://api.github.com/repos/ollama/ollama/issues/6080/events
https://github.com/ollama/ollama/issues/6080
2,438,714,003
I_kwDOJ0Z1Ps6RW9KT
6,080
Incorrect free VRAM reporting when two CUDA cards with different VRAM capacities are installed, preventing Ollama from using GPU inference
{ "login": "XJTU-WXY", "id": 132470925, "node_id": "U_kgDOB-VYjQ", "avatar_url": "https://avatars.githubusercontent.com/u/132470925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XJTU-WXY", "html_url": "https://github.com/XJTU-WXY", "followers_url": "https://api.github.com/users/XJTU-WXY/followers", "following_url": "https://api.github.com/users/XJTU-WXY/following{/other_user}", "gists_url": "https://api.github.com/users/XJTU-WXY/gists{/gist_id}", "starred_url": "https://api.github.com/users/XJTU-WXY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XJTU-WXY/subscriptions", "organizations_url": "https://api.github.com/users/XJTU-WXY/orgs", "repos_url": "https://api.github.com/users/XJTU-WXY/repos", "events_url": "https://api.github.com/users/XJTU-WXY/events{/privacy}", "received_events_url": "https://api.github.com/users/XJTU-WXY/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
5
2024-07-30T21:29:35
2024-11-05T23:21:39
2024-11-05T23:21:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

Dear Ollama developers: first of all, thank you very much for developing and maintaining Ollama. Open source leads the world to a brighter future!

I use the _gemma2:27b_ model, and my problem is:

- When my device has only a Tesla P40 (24 GB VRAM) installed, Ollama automatically uses GPU inference and runs very well.
- When I also install a Quadro K620 (2 GB VRAM) for display output, Ollama cannot use the P40 and is forced to use CPU inference.

**The nvidia-smi output is:** ![image](https://github.com/user-attachments/assets/5e3c2b91-7c40-44cc-b0ee-bb614b666032)

**The server.log is:** [server-2.log](https://github.com/user-attachments/files/16433537/server-2.log)

I set the environment variable _"CUDA_VISIBLE_DEVICES"_ to the UUID of my P40, and the log line `time=2024-07-31T04:44:20.388+08:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-085790c7-bee0-4de1-db17-6685d68470ca library=cuda compute=6.1 driver=12.4 name="Tesla P40" total="23.9 GiB" available="23.7 GiB"` suggests that Ollama **has found the P40 card.** However, the next log line `time=2024-07-31T04:44:37.793+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=47 layers.offload=0 layers.split="" memory.available="[1.6 GiB]" memory.required.full="15.3 GiB" memory.required.partial="0 B" memory.required.kv="736.0 MiB" memory.required.allocations="[0 B]" memory.weights.total="14.4 GiB" memory.weights.repeating="13.5 GiB" memory.weights.nonrepeating="922.9 MiB" memory.graph.full="509.0 MiB" memory.graph.partial="1.4 GiB"` suggests that the **available VRAM was just 1.6 GB**, which is exactly **the free VRAM of my K620**, after which Ollama used cpu_avx2 to run the inference. I guess Ollama ignored the CUDA_VISIBLE_DEVICES environment variable I set, detected the free VRAM of the K620 instead of the P40, and automatically fell back to CPU inference after finding that gemma2:27b cannot run with only 1.6 GB of VRAM. I don't know Go, so I wonder if this is a bug in Ollama's free-VRAM detection. I hope you can check this problem, thanks! (A workaround sketch follows this record.)

### OS

Windows

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.3.0
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6080/timeline
null
completed
false
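A workaround consistent with the report above, as a minimal, untested sketch: query per-GPU free memory with `nvidia-smi`, keep only the card with the most free VRAM, and set `CUDA_VISIBLE_DEVICES` in the environment of the server process itself. The `nvidia-smi` query flags are standard; whether this sidesteps the scheduling behavior described in the report is an assumption, not something the thread confirms.

```python
import os
import subprocess

# List GPU UUIDs and free VRAM in MiB (standard nvidia-smi query flags).
out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=uuid,memory.free",
     "--format=csv,noheader,nounits"],
    text=True,
)
gpus = []
for line in out.strip().splitlines():
    uuid, free_mib = (part.strip() for part in line.split(","))
    gpus.append((uuid, int(free_mib)))

# Keep the GPU with the most free VRAM (the Tesla P40 in this report,
# rather than the 2 GB display card).
best_uuid, best_free = max(gpus, key=lambda g: g[1])
print(f"pinning Ollama to {best_uuid} ({best_free} MiB free)")

# CUDA_VISIBLE_DEVICES must be set before the server starts; exporting
# it for a client after `ollama serve` is already running has no effect.
env = dict(os.environ, CUDA_VISIBLE_DEVICES=best_uuid)
subprocess.Popen(["ollama", "serve"], env=env)
```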
https://api.github.com/repos/ollama/ollama/issues/2356
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2356/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2356/comments
https://api.github.com/repos/ollama/ollama/issues/2356/events
https://github.com/ollama/ollama/issues/2356
2,117,604,218
I_kwDOJ0Z1Ps5-OBN6
2,356
Phi modelfile is incorrect
{ "login": "mak448a", "id": 94062293, "node_id": "U_kgDOBZtG1Q", "avatar_url": "https://avatars.githubusercontent.com/u/94062293?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mak448a", "html_url": "https://github.com/mak448a", "followers_url": "https://api.github.com/users/mak448a/followers", "following_url": "https://api.github.com/users/mak448a/following{/other_user}", "gists_url": "https://api.github.com/users/mak448a/gists{/gist_id}", "starred_url": "https://api.github.com/users/mak448a/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mak448a/subscriptions", "organizations_url": "https://api.github.com/users/mak448a/orgs", "repos_url": "https://api.github.com/users/mak448a/repos", "events_url": "https://api.github.com/users/mak448a/events{/privacy}", "received_events_url": "https://api.github.com/users/mak448a/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
7
2024-02-05T03:32:12
2024-03-13T01:05:19
2024-03-12T18:40:57
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When I use phi with Ollama and pass in the system prompt, it doesn't respond as well as it does in LM Studio. Is the internal prompt template in Ollama correct? LM Studio uses "Instruct:" and "Output:" as markers for the user's message and the assistant's message.

LM Studio: `{"speech": "Hi!", "program": "null"}`

Ollama: ` Welcome to our chatbot program. How can I assist you today?`

Here's the code I used (a sketch for checking the template follows this record):

```python
import ollama

prompt = """You are Daniel. Give a response as a JSON object with properties "speech" and "program". Both of these keys must always be filled. Do not reply with anything else other than a JSON object.

Example of JSON object:
{"speech": "Hi!", "program": "null"}

Instruct: Hello!
Output: {"speech": "Hi!", "program": "null"}
Instruct: Can you open discord?
Output: {"speech": "Certainly!", "program": "discord"}
Instruct: Can you open firefox?
Output: {"speech": "Certainly! Here it is!", "program": "firefox"}
Instruct: Turn off the computer.
Output: {"speech": "Sure, I'll do that.", "program": "shutdown"}
Instruct: Goodnight.
Output: {"speech": "You too!", "program": "null"}"""

response = ollama.chat(
    model="phi",
    messages=[
        {"role": "system", "content": prompt},
        {"role": "user", "content": "Hello!"},
    ],
    stream=True,
)

for chunk in response:
    print(chunk["message"]["content"], end="", flush=True)
```

Also, should I post this in ollama-python instead of the main ollama repo?
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2356/timeline
null
completed
false
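One way to check whether the server-side phi template, rather than the model itself, is what differs from LM Studio: inspect the template Ollama stores for the model, then bypass it with raw mode and hand-build the Instruct:/Output: format quoted above. This is an illustrative sketch with the `ollama` Python client; the `template` field name and the `raw=True` passthrough are assumptions based on the client's `/api/show` and `/api/generate` surfaces, not something this thread verifies.

```python
import ollama

# Inspect the prompt template Ollama applies to phi
# (field name assumed from the /api/show response).
info = ollama.show("phi")
print(info["template"])

# Bypass the template with raw mode and hand-build the
# Instruct:/Output: format that LM Studio uses for phi.
prompt = "Instruct: Hello!\nOutput:"
resp = ollama.generate(model="phi", prompt=prompt, raw=True)
print(resp["response"])
```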
https://api.github.com/repos/ollama/ollama/issues/1535
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1535/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1535/comments
https://api.github.com/repos/ollama/ollama/issues/1535/events
https://github.com/ollama/ollama/pull/1535
2,042,775,976
PR_kwDOJ0Z1Ps5iD1Wu
1,535
add API tests for list handler
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-12-15T02:16:09
2023-12-15T02:18:26
2023-12-15T02:18:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1535", "html_url": "https://github.com/ollama/ollama/pull/1535", "diff_url": "https://github.com/ollama/ollama/pull/1535.diff", "patch_url": "https://github.com/ollama/ollama/pull/1535.patch", "merged_at": "2023-12-15T02:18:25" }
This change adds tests for the `GET /api/list` endpoint: one case where no models are returned, and one that returns a single entry. (An illustrative sketch of this check follows this record.)
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1535/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1535/timeline
null
null
true
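The PR itself adds Go unit tests against the list handler; as a rough illustration of the same two cases (no models, a single entry), here is a hedged Python integration check against a locally running server. Note that the documented list route in current Ollama is `GET /api/tags`, while the PR text calls it `/api/list`.

```python
import requests

BASE = "http://localhost:11434"  # default Ollama address

def test_list_models():
    # Mirrors the PR's two cases: the endpoint should answer with a
    # JSON object holding a "models" array, whether empty or not.
    resp = requests.get(f"{BASE}/api/tags", timeout=5)
    resp.raise_for_status()
    body = resp.json()
    assert isinstance(body.get("models"), list)
    for model in body["models"]:
        assert "name" in model  # each entry carries at least a name

if __name__ == "__main__":
    test_list_models()
    print("list endpoint OK")
```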
https://api.github.com/repos/ollama/ollama/issues/1766
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1766/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1766/comments
https://api.github.com/repos/ollama/ollama/issues/1766/events
https://github.com/ollama/ollama/pull/1766
2,064,196,581
PR_kwDOJ0Z1Ps5jJbV-
1,766
Update README.md
{ "login": "cole-gillespie", "id": 745064, "node_id": "MDQ6VXNlcjc0NTA2NA==", "avatar_url": "https://avatars.githubusercontent.com/u/745064?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cole-gillespie", "html_url": "https://github.com/cole-gillespie", "followers_url": "https://api.github.com/users/cole-gillespie/followers", "following_url": "https://api.github.com/users/cole-gillespie/following{/other_user}", "gists_url": "https://api.github.com/users/cole-gillespie/gists{/gist_id}", "starred_url": "https://api.github.com/users/cole-gillespie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cole-gillespie/subscriptions", "organizations_url": "https://api.github.com/users/cole-gillespie/orgs", "repos_url": "https://api.github.com/users/cole-gillespie/repos", "events_url": "https://api.github.com/users/cole-gillespie/events{/privacy}", "received_events_url": "https://api.github.com/users/cole-gillespie/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-01-03T15:10:12
2024-01-03T15:44:22
2024-01-03T15:44:22
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1766", "html_url": "https://github.com/ollama/ollama/pull/1766", "diff_url": "https://github.com/ollama/ollama/pull/1766.diff", "patch_url": "https://github.com/ollama/ollama/pull/1766.patch", "merged_at": "2024-01-03T15:44:22" }
fix quickstart spelling
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1766/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4893
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4893/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4893/comments
https://api.github.com/repos/ollama/ollama/issues/4893/events
https://github.com/ollama/ollama/issues/4893
2,339,497,859
I_kwDOJ0Z1Ps6LceeD
4,893
Error: error loading llama server" error="llama runner process has terminated: exit status 0xc0000409
{ "login": "Hsiayukoo", "id": 81662220, "node_id": "MDQ6VXNlcjgxNjYyMjIw", "avatar_url": "https://avatars.githubusercontent.com/u/81662220?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hsiayukoo", "html_url": "https://github.com/Hsiayukoo", "followers_url": "https://api.github.com/users/Hsiayukoo/followers", "following_url": "https://api.github.com/users/Hsiayukoo/following{/other_user}", "gists_url": "https://api.github.com/users/Hsiayukoo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hsiayukoo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hsiayukoo/subscriptions", "organizations_url": "https://api.github.com/users/Hsiayukoo/orgs", "repos_url": "https://api.github.com/users/Hsiayukoo/repos", "events_url": "https://api.github.com/users/Hsiayukoo/events{/privacy}", "received_events_url": "https://api.github.com/users/Hsiayukoo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-06-07T03:17:23
2024-06-11T03:44:25
2024-06-09T17:33:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

### 1. Background

I want to use **llama.cpp** to build a llama2-7b model based on my own ckpt file, following these steps:

1. Download [llama2-7b.Q2_K](https://huggingface.co/TheBloke/Llama-2-7B-GGUF/blob/main/llama-2-7b.Q2_K.gguf) from Hugging Face. (This gguf file can be loaded by Ollama.)
2. Read the gguf file, get the metadata key-value pairs, and **save them into JSON format**.
3. Read the ckpt file, **convert the tensor names to gguf format and convert the tensors to numpy.ndarray**.
4. Follow the **llama.cpp** example writer (https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/examples/writer.py) to write example.gguf. (I have checked that the start of each value before the rest of the file is the same as in llama-2-7b.Q2_K.gguf, even their start offsets are the same, and I also checked that example.gguf can be quantized by llama.cpp.)

![image](https://github.com/ollama/ollama/assets/81662220/076e7442-5bfc-49b6-b10d-a9fc93cf92b8)

### 2. What happened?

**error loading llama server" error="llama runner process has terminated: exit status 0xc0000409**

Here is the log.

```shell
[GIN] 2024/06/07 - 09:15:59 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/06/07 - 09:16:40 | 200 | 0s | 127.0.0.1 | POST "/api/blobs/sha256:1446039e892b513e16dd803d0b4ca3b8ee9c2b0c61b808f4884d070d01dc9f2f"
[GIN] 2024/06/07 - 09:19:25 | 200 | 2m44s | 127.0.0.1 | POST "/api/create"
[GIN] 2024/06/07 - 09:19:33 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/06/07 - 09:19:33 | 200 | 26.3074ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/06/07 - 09:19:33 | 200 | 1.1037ms | 127.0.0.1 | POST "/api/show"
time=2024-06-07T09:19:34.199+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="55.3 GiB" memory.required.full="13.8 GiB" memory.required.partial="13.8 GiB" memory.required.kv="1.0 GiB" memory.weights.total="12.3 GiB" memory.weights.repeating="12.1 GiB" memory.weights.nonrepeating="250.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB"
time=2024-06-07T09:19:34.201+08:00 level=INFO source=server.go:341 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cpu_avx2\\ollama_llama_server.exe --model D:\\ollama_models\\blobs\\sha256-1446039e892b513e16dd803d0b4ca3b8ee9c2b0c61b808f4884d070d01dc9f2f --ctx-size 2048 --batch-size 512 --embedding --log-disable --parallel 1 --port 64940"
time=2024-06-07T09:19:34.245+08:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-06-07T09:19:34.245+08:00 level=INFO source=server.go:529 msg="waiting for llama runner to start responding"
time=2024-06-07T09:19:34.245+08:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3051 commit="5921b8f0" tid="11088" timestamp=1717723174
INFO [wmain] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="11088" timestamp=1717723174 total_threads=16
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="64940" tid="11088" timestamp=1717723174
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from
D:\ollama_models\blobs\sha256-1446039e892b513e16dd803d0b4ca3b8ee9c2b0c61b808f4884d070d01dc9f2f (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = LLaMA v2 llama_model_loader: - kv 2: llama.context_length u32 = 4096 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 llama_model_loader: - kv 4: llama.block_count u32 = 32 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 10: general.file_type u32 = 10 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 18: general.quantization_version u32 = 2 llama_model_loader: - type f16: 291 tensors llm_load_vocab: special tokens cache size = 259 llm_load_vocab: token to piece cache size = 0.3368 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merges = 0 llm_load_print_meta: n_ctx_train = 4096 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 32 llm_load_print_meta: n_layer = 32 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 1 llm_load_print_meta: n_embd_k_gqa = 4096 llm_load_print_meta: n_embd_v_gqa = 4096 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 11008 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 10000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_yarn_orig_ctx = 4096 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: model type = 7B llm_load_print_meta: model ftype = Q2_K - Medium llm_load_print_meta: model params = 6.74 B llm_load_print_meta: model size = 12.55 GiB (16.00 BPW) llm_load_print_meta: general.name = LLaMA v2 llm_load_print_meta: BOS token = 1 '<s>' llm_load_print_meta: EOS token = 2 '</s>' llm_load_print_meta: UNK token = 0 '<unk>' llm_load_print_meta: LF token = 13 '<0x0A>' llm_load_tensors: ggml ctx 
size = 0.15 MiB time=2024-06-07T09:19:34.503+08:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server loading model" llm_load_tensors: CPU buffer size = 12852.51 MiB llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: n_batch = 512 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 10000.0 llama_new_context_with_model: freq_scale = 1 llama_kv_cache_init: CPU KV buffer size = 1024.00 MiB llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB llama_new_context_with_model: CPU output buffer size = 0.14 MiB llama_new_context_with_model: CPU compute buffer size = 164.01 MiB llama_new_context_with_model: graph nodes = 1030 llama_new_context_with_model: graph splits = 1 GGML_ASSERT: C:\a\ollama\ollama\llm\llama.cpp\ggml.c:10240: src1->type == GGML_TYPE_F32 && "only f32 src1 supported for now" GGML_ASSERT:time=2024-06-07T09:19:36.086+08:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error" time=2024-06-07T09:19:36.349+08:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 " [GIN] 2024/06/07 - 09:19:36 | 500 | 2.4369838s | 127.0.0.1 | POST "/api/chat" ``` here is the log of quantize. ```shell PS D:\gguf-mindspore\llama.cpp-master> .\bin\quantize.exe .\example.gguf .\example_q2.gguf q2_k main: build = 10 (a30cd28) main: built with gcc.exe (x86_64-posix-seh-rev1, Built by MinGW-Builds project) 13.2.0 for x86_64-w64-mingw32 main: quantizing '.\example.gguf' to '.\example_q2.gguf' as Q2_K llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from .\example.gguf (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = LLaMA v2 llama_model_loader: - kv 2: llama.context_length u32 = 4096 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 llama_model_loader: - kv 4: llama.block_count u32 = 32 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 10: general.file_type u32 = 10 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 18: general.quantization_version u32 = 2 llama_model_loader: - type f16: 291 tensors [ 1/ 291] token_embd.weight - [ 4096, 32000, 1, 1], type = f16, converting to q2_K .. size = 250.00 MiB -> 41.02 MiB [ 2/ 291] blk.0.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. 
size = 32.00 MiB -> 5.25 MiB [ 3/ 291] blk.0.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 4/ 291] blk.0.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 5/ 291] blk.0.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 6/ 291] blk.0.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 7/ 291] blk.0.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 8/ 291] blk.0.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 9/ 291] blk.0.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 10/ 291] blk.0.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 11/ 291] blk.1.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 12/ 291] blk.1.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 13/ 291] blk.1.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 14/ 291] blk.1.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 15/ 291] blk.1.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 16/ 291] blk.1.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 17/ 291] blk.1.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 18/ 291] blk.1.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 19/ 291] blk.1.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 20/ 291] blk.2.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 21/ 291] blk.2.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 22/ 291] blk.2.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 23/ 291] blk.2.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 24/ 291] blk.2.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 25/ 291] blk.2.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 26/ 291] blk.2.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 27/ 291] blk.2.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 28/ 291] blk.2.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 29/ 291] blk.3.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 30/ 291] blk.3.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 31/ 291] blk.3.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 32/ 291] blk.3.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 33/ 291] blk.3.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. 
size = 86.00 MiB -> 14.11 MiB [ 34/ 291] blk.3.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 35/ 291] blk.3.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 36/ 291] blk.3.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 37/ 291] blk.3.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 38/ 291] blk.4.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 39/ 291] blk.4.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 40/ 291] blk.4.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 41/ 291] blk.4.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 42/ 291] blk.4.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 43/ 291] blk.4.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 44/ 291] blk.4.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 45/ 291] blk.4.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 46/ 291] blk.4.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 47/ 291] blk.5.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 48/ 291] blk.5.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 49/ 291] blk.5.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 50/ 291] blk.5.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 51/ 291] blk.5.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 52/ 291] blk.5.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 53/ 291] blk.5.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 54/ 291] blk.5.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 55/ 291] blk.5.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 56/ 291] blk.6.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 57/ 291] blk.6.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 58/ 291] blk.6.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 59/ 291] blk.6.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 60/ 291] blk.6.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 61/ 291] blk.6.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 62/ 291] blk.6.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 63/ 291] blk.6.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 64/ 291] blk.6.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 65/ 291] blk.7.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. 
size = 32.00 MiB -> 5.25 MiB [ 66/ 291] blk.7.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 67/ 291] blk.7.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 68/ 291] blk.7.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 69/ 291] blk.7.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 70/ 291] blk.7.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 71/ 291] blk.7.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 72/ 291] blk.7.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 73/ 291] blk.7.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 74/ 291] blk.8.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 75/ 291] blk.8.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 76/ 291] blk.8.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 77/ 291] blk.8.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 78/ 291] blk.8.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 79/ 291] blk.8.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 80/ 291] blk.8.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 81/ 291] blk.8.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 82/ 291] blk.8.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 83/ 291] blk.9.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 84/ 291] blk.9.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 85/ 291] blk.9.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 86/ 291] blk.9.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 87/ 291] blk.9.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 88/ 291] blk.9.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 89/ 291] blk.9.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 90/ 291] blk.9.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 91/ 291] blk.9.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 92/ 291] blk.10.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 93/ 291] blk.10.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 94/ 291] blk.10.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 95/ 291] blk.10.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 96/ 291] blk.10.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. 
size = 86.00 MiB -> 14.11 MiB [ 97/ 291] blk.10.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 98/ 291] blk.10.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 99/ 291] blk.10.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 100/ 291] blk.10.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 101/ 291] blk.11.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 102/ 291] blk.11.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 103/ 291] blk.11.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 104/ 291] blk.11.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 105/ 291] blk.11.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 106/ 291] blk.11.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 107/ 291] blk.11.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 108/ 291] blk.11.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 109/ 291] blk.11.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 110/ 291] blk.12.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 111/ 291] blk.12.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 112/ 291] blk.12.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 113/ 291] blk.12.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 114/ 291] blk.12.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 115/ 291] blk.12.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 116/ 291] blk.12.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 117/ 291] blk.12.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 118/ 291] blk.12.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 119/ 291] blk.13.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 120/ 291] blk.13.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 121/ 291] blk.13.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 122/ 291] blk.13.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 123/ 291] blk.13.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 124/ 291] blk.13.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 125/ 291] blk.13.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. 
size = 86.00 MiB -> 18.48 MiB [ 126/ 291] blk.13.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 127/ 291] blk.13.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 128/ 291] blk.14.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 129/ 291] blk.14.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 130/ 291] blk.14.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 131/ 291] blk.14.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 132/ 291] blk.14.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 133/ 291] blk.14.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 134/ 291] blk.14.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 135/ 291] blk.14.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 136/ 291] blk.14.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 137/ 291] blk.15.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 138/ 291] blk.15.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 139/ 291] blk.15.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 140/ 291] blk.15.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 141/ 291] blk.15.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 142/ 291] blk.15.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 143/ 291] blk.15.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 144/ 291] blk.15.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 145/ 291] blk.15.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 146/ 291] blk.16.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 147/ 291] blk.16.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 148/ 291] blk.16.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 149/ 291] blk.16.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB [ 150/ 291] blk.16.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 151/ 291] blk.16.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB [ 152/ 291] blk.16.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB [ 153/ 291] blk.16.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 154/ 291] blk.16.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB [ 155/ 291] blk.17.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB [ 156/ 291] blk.17.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. 
size = 32.00 MiB -> 5.25 MiB
[ 157/ 291] blk.17.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 158/ 291] blk.17.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 159/ 291] blk.17.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 160/ 291] blk.17.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 161/ 291] blk.17.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 162/ 291] blk.17.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 163/ 291] blk.17.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 164/ 291] blk.18.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 165/ 291] blk.18.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 166/ 291] blk.18.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 167/ 291] blk.18.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 168/ 291] blk.18.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 169/ 291] blk.18.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 170/ 291] blk.18.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 171/ 291] blk.18.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 172/ 291] blk.18.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 173/ 291] blk.19.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 174/ 291] blk.19.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 175/ 291] blk.19.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 176/ 291] blk.19.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 177/ 291] blk.19.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 178/ 291] blk.19.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 179/ 291] blk.19.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 180/ 291] blk.19.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 181/ 291] blk.19.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 182/ 291] blk.20.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 183/ 291] blk.20.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 184/ 291] blk.20.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 185/ 291] blk.20.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 186/ 291] blk.20.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 187/ 291] blk.20.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 188/ 291] blk.20.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 189/ 291] blk.20.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 190/ 291] blk.20.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 191/ 291] blk.21.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 192/ 291] blk.21.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 193/ 291] blk.21.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 194/ 291] blk.21.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 195/ 291] blk.21.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 196/ 291] blk.21.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 197/ 291] blk.21.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 198/ 291] blk.21.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 199/ 291] blk.21.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 200/ 291] blk.22.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 201/ 291] blk.22.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 202/ 291] blk.22.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 203/ 291] blk.22.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 204/ 291] blk.22.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 205/ 291] blk.22.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 206/ 291] blk.22.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 207/ 291] blk.22.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 208/ 291] blk.22.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 209/ 291] blk.23.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 210/ 291] blk.23.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 211/ 291] blk.23.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 212/ 291] blk.23.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 213/ 291] blk.23.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 214/ 291] blk.23.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 215/ 291] blk.23.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 216/ 291] blk.23.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 217/ 291] blk.23.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 218/ 291] blk.24.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 219/ 291] blk.24.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 220/ 291] blk.24.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 221/ 291] blk.24.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 222/ 291] blk.24.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 223/ 291] blk.24.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 224/ 291] blk.24.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 225/ 291] blk.24.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 226/ 291] blk.24.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 227/ 291] blk.25.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 228/ 291] blk.25.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 229/ 291] blk.25.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 230/ 291] blk.25.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 231/ 291] blk.25.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 232/ 291] blk.25.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 233/ 291] blk.25.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 234/ 291] blk.25.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 235/ 291] blk.25.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 236/ 291] blk.26.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 237/ 291] blk.26.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 238/ 291] blk.26.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 239/ 291] blk.26.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 240/ 291] blk.26.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 241/ 291] blk.26.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 242/ 291] blk.26.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 243/ 291] blk.26.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 244/ 291] blk.26.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 245/ 291] blk.27.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 246/ 291] blk.27.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 247/ 291] blk.27.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 248/ 291] blk.27.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 249/ 291] blk.27.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 250/ 291] blk.27.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 251/ 291] blk.27.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 252/ 291] blk.27.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 253/ 291] blk.27.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 254/ 291] blk.28.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 255/ 291] blk.28.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 256/ 291] blk.28.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 257/ 291] blk.28.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 258/ 291] blk.28.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 259/ 291] blk.28.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 260/ 291] blk.28.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 261/ 291] blk.28.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 262/ 291] blk.28.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 263/ 291] blk.29.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 264/ 291] blk.29.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 265/ 291] blk.29.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 266/ 291] blk.29.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 267/ 291] blk.29.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 268/ 291] blk.29.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 269/ 291] blk.29.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 270/ 291] blk.29.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 271/ 291] blk.29.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 272/ 291] blk.30.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 273/ 291] blk.30.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 274/ 291] blk.30.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 275/ 291] blk.30.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 276/ 291] blk.30.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 277/ 291] blk.30.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 278/ 291] blk.30.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 279/ 291] blk.30.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 280/ 291] blk.30.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 281/ 291] blk.31.attn_q.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 282/ 291] blk.31.attn_k.weight - [ 4096, 4096, 1, 1], type = f16, converting to q2_K .. size = 32.00 MiB -> 5.25 MiB
[ 283/ 291] blk.31.attn_v.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 284/ 291] blk.31.attn_output.weight - [ 4096, 4096, 1, 1], type = f16, converting to q3_K .. size = 32.00 MiB -> 6.88 MiB
[ 285/ 291] blk.31.ffn_gate.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 286/ 291] blk.31.ffn_up.weight - [ 4096, 11008, 1, 1], type = f16, converting to q2_K .. size = 86.00 MiB -> 14.11 MiB
[ 287/ 291] blk.31.ffn_down.weight - [11008, 4096, 1, 1], type = f16, converting to q3_K .. size = 86.00 MiB -> 18.48 MiB
[ 288/ 291] blk.31.attn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 289/ 291] blk.31.ffn_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 290/ 291] output_norm.weight - [ 4096, 1, 1, 1], type = f16, size = 0.008 MB
[ 291/ 291] output.weight - [ 4096, 32000, 1, 1], type = f16, converting to q6_K .. size = 250.00 MiB -> 102.54 MiB
llama_model_quantize_internal: model size = 12852.51 MB
llama_model_quantize_internal: quant size = 2414.31 MB
main: quantize time = 80060.02 ms
main: total time = 80060.03 ms
```

### OS

Windows

### GPU

Other

### CPU

Intel

### Ollama version

0.1.41
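For anyone sanity-checking the numbers in the log above, the per-tensor sizes follow directly from the nominal bits-per-weight of the k-quant formats. A minimal sketch; the block sizes (84, 110, and 210 bytes per 256-weight block for q2_K/q3_K/q6_K) are taken from llama.cpp's ggml structs and should be treated as assumptions:

```python
# Nominal average bits per weight for each format.
MIB = 1 << 20
BPW = {"f16": 16.0, "q2_K": 84 * 8 / 256, "q3_K": 110 * 8 / 256, "q6_K": 210 * 8 / 256}

def tensor_mib(rows: int, cols: int, fmt: str) -> float:
    """Expected tensor size in MiB for rows*cols weights in the given format."""
    return rows * cols * BPW[fmt] / 8 / MIB

print(tensor_mib(4096, 4096, "q2_K"))   # 5.25   -> matches attn_q / attn_k
print(tensor_mib(4096, 4096, "q3_K"))   # 6.875  -> matches attn_v / attn_output
print(tensor_mib(4096, 11008, "q2_K"))  # 14.109 -> matches ffn_gate / ffn_up
print(tensor_mib(4096, 32000, "q6_K"))  # 102.54 -> matches output.weight
```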
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4893/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2065
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2065/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2065/comments
https://api.github.com/repos/ollama/ollama/issues/2065/events
https://github.com/ollama/ollama/issues/2065
2,089,536,146
I_kwDOJ0Z1Ps58i8qS
2,065
Any ollama command results in CORE DUMPED (ollama not using GPU)
{ "login": "Rushmore75", "id": 76796612, "node_id": "MDQ6VXNlcjc2Nzk2NjEy", "avatar_url": "https://avatars.githubusercontent.com/u/76796612?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rushmore75", "html_url": "https://github.com/Rushmore75", "followers_url": "https://api.github.com/users/Rushmore75/followers", "following_url": "https://api.github.com/users/Rushmore75/following{/other_user}", "gists_url": "https://api.github.com/users/Rushmore75/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rushmore75/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rushmore75/subscriptions", "organizations_url": "https://api.github.com/users/Rushmore75/orgs", "repos_url": "https://api.github.com/users/Rushmore75/repos", "events_url": "https://api.github.com/users/Rushmore75/events{/privacy}", "received_events_url": "https://api.github.com/users/Rushmore75/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5755339642, "node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg", "url": "https://api.github.com/repos/ollama/ollama/labels/linux", "name": "linux", "color": "516E70", "default": false, "description": "" } ]
closed
false
null
[]
null
8
2024-01-19T04:08:11
2024-03-11T17:59:19
2024-03-11T17:59:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Trying to interact with the command at all just returns `Illegal instruction (core dumped)`. The journalctl logs just show

```
Started Ollama Service
ollama.service: Main process exited, code=dumped, status=4/ILL
ollama.service: Failed with result 'core-dump'.
```

System:
* Kernel: 5.15.0-91-generic
* Distro: Ubuntu 22.04.3 LTS

Hardware: (Proxmox 8.1.3)
* CPU: x86-64-v2-AES
* GPU: (Passthru) Nvidia 1070
* BIOS: SeaBIOS
* Machine: i440fx

I would imagine it is linked to #2000 - perhaps something to do with VMs?
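`status=4/ILL` means the binary hit an instruction the virtual CPU does not implement; with Proxmox CPU types this is often a missing AVX extension rather than a GPU problem. A quick check, assuming the VM runs Linux (the flag names below are the standard /proc/cpuinfo ones):

```python
# Does the VM's virtual CPU expose the SIMD extensions that prebuilt
# llama.cpp binaries commonly expect? Run inside the guest.
with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

for isa in ("avx", "avx2", "f16c", "fma"):
    print(f"{isa}: {'present' if isa in flags else 'MISSING'}")
```

If AVX is missing, switching the Proxmox CPU type to `host` (or another type that forwards AVX) may be worth trying.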
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2065/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2065/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/945
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/945/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/945/comments
https://api.github.com/repos/ollama/ollama/issues/945/events
https://github.com/ollama/ollama/issues/945
1,966,869,007
I_kwDOJ0Z1Ps51PAoP
945
How does one delete ollama?
{ "login": "improvethings", "id": 16601027, "node_id": "MDQ6VXNlcjE2NjAxMDI3", "avatar_url": "https://avatars.githubusercontent.com/u/16601027?v=4", "gravatar_id": "", "url": "https://api.github.com/users/improvethings", "html_url": "https://github.com/improvethings", "followers_url": "https://api.github.com/users/improvethings/followers", "following_url": "https://api.github.com/users/improvethings/following{/other_user}", "gists_url": "https://api.github.com/users/improvethings/gists{/gist_id}", "starred_url": "https://api.github.com/users/improvethings/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/improvethings/subscriptions", "organizations_url": "https://api.github.com/users/improvethings/orgs", "repos_url": "https://api.github.com/users/improvethings/repos", "events_url": "https://api.github.com/users/improvethings/events{/privacy}", "received_events_url": "https://api.github.com/users/improvethings/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-10-29T06:54:36
2023-11-20T10:35:26
2023-10-30T15:14:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I don't have much disk space in /, so I need to delete ollama and reinstall it in a custom directory. Thanks in advance!
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/945/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/501
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/501/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/501/comments
https://api.github.com/repos/ollama/ollama/issues/501/events
https://github.com/ollama/ollama/issues/501
1,888,513,519
I_kwDOJ0Z1Ps5wkG3v
501
large embedded file fails on model create
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2023-09-09T00:43:48
2023-10-27T19:22:47
2023-10-27T19:22:47
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Adding a large file to an embedding may cause an unexpected error.

```
ollama create exampleModel -f Modelfile
...
Error: unexpected end to create model
```

```
FROM codellama
SYSTEM """
You are a DND game master that reviews dice rolls and responds with JSON in the following format: "{\"action\":\"do stuff\"}"
"""
EMBED embeds/*.txt
```

```
2% || (4367/151236, 31 it/s) [4m59s:1h19m37s]creating model system layer
```

There shouldn't be a limit. The buffer size may be reaching its capacity.
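As a hypothetical workaround while the buffer issue is investigated, the oversized inputs could be split before `EMBED` globs them. A sketch; the 1 MiB chunk size and the `embeds/` layout are assumptions taken from the Modelfile above:

```python
from pathlib import Path

CHUNK = 1 << 20  # 1 MiB per piece; arbitrary, tune as needed

# Snapshot the file list first so newly written parts aren't re-globbed.
for src in list(Path("embeds").glob("*.txt")):
    text = src.read_text(encoding="utf-8")
    if len(text) <= CHUNK:
        continue  # already small enough
    for i, start in enumerate(range(0, len(text), CHUNK)):
        part = src.with_name(f"{src.stem}.part{i:03d}.txt")
        part.write_text(text[start:start + CHUNK], encoding="utf-8")
    src.unlink()  # remove the oversized original so EMBED only sees the parts
```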
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/501/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/501/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/193
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/193/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/193/comments
https://api.github.com/repos/ollama/ollama/issues/193/events
https://github.com/ollama/ollama/issues/193
1,818,777,907
I_kwDOJ0Z1Ps5saFkz
193
Ability to download LLAMA2 70b
{ "login": "plannaAlain", "id": 88775056, "node_id": "MDQ6VXNlcjg4Nzc1MDU2", "avatar_url": "https://avatars.githubusercontent.com/u/88775056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/plannaAlain", "html_url": "https://github.com/plannaAlain", "followers_url": "https://api.github.com/users/plannaAlain/followers", "following_url": "https://api.github.com/users/plannaAlain/following{/other_user}", "gists_url": "https://api.github.com/users/plannaAlain/gists{/gist_id}", "starred_url": "https://api.github.com/users/plannaAlain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/plannaAlain/subscriptions", "organizations_url": "https://api.github.com/users/plannaAlain/orgs", "repos_url": "https://api.github.com/users/plannaAlain/repos", "events_url": "https://api.github.com/users/plannaAlain/events{/privacy}", "received_events_url": "https://api.github.com/users/plannaAlain/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
7
2023-07-24T16:44:44
2023-08-05T13:03:50
2023-08-04T20:04:42
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/193/reactions", "total_count": 12, "+1": 12, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/193/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7919
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7919/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7919/comments
https://api.github.com/repos/ollama/ollama/issues/7919/events
https://github.com/ollama/ollama/issues/7919
2,715,230,108
I_kwDOJ0Z1Ps6h1x-c
7,919
Performance decline
{ "login": "axil76", "id": 1433185, "node_id": "MDQ6VXNlcjE0MzMxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/1433185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/axil76", "html_url": "https://github.com/axil76", "followers_url": "https://api.github.com/users/axil76/followers", "following_url": "https://api.github.com/users/axil76/following{/other_user}", "gists_url": "https://api.github.com/users/axil76/gists{/gist_id}", "starred_url": "https://api.github.com/users/axil76/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/axil76/subscriptions", "organizations_url": "https://api.github.com/users/axil76/orgs", "repos_url": "https://api.github.com/users/axil76/repos", "events_url": "https://api.github.com/users/axil76/events{/privacy}", "received_events_url": "https://api.github.com/users/axil76/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
null
[]
null
16
2024-12-03T14:45:01
2025-01-13T01:32:33
2025-01-13T01:32:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

I am testing a vGPU on a vSphere 8 cluster. The drivers work on the Red Hat 8 OS and work in Docker. When the VM boots, the Ollama server responds well, but after several minutes it stops responding:

```
Device 0: NVIDIA L40S-24C, compute capability 8.9, VMM: no
time=2024-12-03T14:30:07.963Z level=INFO source=server.go:593 msg="waiting for server to become available" status="llm server loading model"
```

The service no longer responds, even though the nvidia-persistenced service is running. I don't understand where the problem comes from; when the card was mounted directly on the VM, it worked in Docker.

```
nvidia-smi
Tue Dec 3 14:37:24 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05             Driver Version: 550.127.05     CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA L40S-24C                Off |   00000000:02:00.0 Off |                    0 |
| N/A   N/A   P0              N/A /  N/A |   12571MiB /  24576MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+
```

ollama version is 0.4.7. Thanks for your answers.

### OS

Docker

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.4.7
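To pin down exactly when the server stops answering relative to the model load, one option is to poll the HTTP API and log timestamps. A minimal sketch, assuming the default port and that the probe runs on the same VM:

```python
import time
import urllib.request

# Poll Ollama's version endpoint; the first FAIL line marks when the
# server stopped responding, which can then be lined up with the server log.
URL = "http://localhost:11434/api/version"

while True:
    t0 = time.time()
    try:
        with urllib.request.urlopen(URL, timeout=5) as r:
            print(f"{time.strftime('%H:%M:%S')} ok {r.status} ({time.time() - t0:.2f}s)")
    except Exception as e:
        print(f"{time.strftime('%H:%M:%S')} FAIL: {e}")
    time.sleep(10)
```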
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7919/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8264
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8264/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8264/comments
https://api.github.com/repos/ollama/ollama/issues/8264/events
https://github.com/ollama/ollama/pull/8264
2,762,135,608
PR_kwDOJ0Z1Ps6GYOeT
8,264
example: add python streamlit frontend UI example
{ "login": "Talen-520", "id": 63370853, "node_id": "MDQ6VXNlcjYzMzcwODUz", "avatar_url": "https://avatars.githubusercontent.com/u/63370853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Talen-520", "html_url": "https://github.com/Talen-520", "followers_url": "https://api.github.com/users/Talen-520/followers", "following_url": "https://api.github.com/users/Talen-520/following{/other_user}", "gists_url": "https://api.github.com/users/Talen-520/gists{/gist_id}", "starred_url": "https://api.github.com/users/Talen-520/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Talen-520/subscriptions", "organizations_url": "https://api.github.com/users/Talen-520/orgs", "repos_url": "https://api.github.com/users/Talen-520/repos", "events_url": "https://api.github.com/users/Talen-520/events{/privacy}", "received_events_url": "https://api.github.com/users/Talen-520/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-12-29T07:07:21
2025-01-09T14:41:37
2025-01-08T22:58:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8264", "html_url": "https://github.com/ollama/ollama/pull/8264", "diff_url": "https://github.com/ollama/ollama/pull/8264.diff", "patch_url": "https://github.com/ollama/ollama/pull/8264.patch", "merged_at": null }
This is a simple frontend user interface built with Streamlit, intended for Python developers with no frontend experience. The code follows the format of an existing example.
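For readers who want to try the idea without checking out the PR, a comparable minimal version looks roughly like this; the model name is a placeholder, and it assumes `pip install streamlit ollama`, a pulled model, and launching with `streamlit run app.py`:

```python
# app.py - minimal chat UI over a local Ollama server.
import ollama
import streamlit as st

st.title("Ollama chat")

# Keep the conversation across Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay history.
for m in st.session_state.messages:
    with st.chat_message(m["role"]):
        st.markdown(m["content"])

if prompt := st.chat_input("Say something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    with st.chat_message("assistant"):
        # Stream tokens into the UI; write_stream returns the full reply.
        stream = ollama.chat(model="llama3.2", messages=st.session_state.messages, stream=True)
        reply = st.write_stream(chunk["message"]["content"] for chunk in stream)
    st.session_state.messages.append({"role": "assistant", "content": reply})
```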
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/users/ParthSareen/followers", "following_url": "https://api.github.com/users/ParthSareen/following{/other_user}", "gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions", "organizations_url": "https://api.github.com/users/ParthSareen/orgs", "repos_url": "https://api.github.com/users/ParthSareen/repos", "events_url": "https://api.github.com/users/ParthSareen/events{/privacy}", "received_events_url": "https://api.github.com/users/ParthSareen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8264/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1318
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1318/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1318/comments
https://api.github.com/repos/ollama/ollama/issues/1318/events
https://github.com/ollama/ollama/issues/1318
2,017,106,154
I_kwDOJ0Z1Ps54Opjq
1,318
How to Open Ollama Service to the Outside World with HTTPS Compatibility?
{ "login": "rehberim360", "id": 144798027, "node_id": "U_kgDOCKFxSw", "avatar_url": "https://avatars.githubusercontent.com/u/144798027?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rehberim360", "html_url": "https://github.com/rehberim360", "followers_url": "https://api.github.com/users/rehberim360/followers", "following_url": "https://api.github.com/users/rehberim360/following{/other_user}", "gists_url": "https://api.github.com/users/rehberim360/gists{/gist_id}", "starred_url": "https://api.github.com/users/rehberim360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rehberim360/subscriptions", "organizations_url": "https://api.github.com/users/rehberim360/orgs", "repos_url": "https://api.github.com/users/rehberim360/repos", "events_url": "https://api.github.com/users/rehberim360/events{/privacy}", "received_events_url": "https://api.github.com/users/rehberim360/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-11-29T17:42:09
2023-12-04T22:15:54
2023-12-04T22:15:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello,

Problem: The Ollama service I've installed on Google VM doesn't seem to accept incoming requests over HTTPS. I'm aiming to allow external requests to reach the server and enable HTTPS support for the Ollama service.

I've taken the following steps:

- Server Configuration: I configured a reverse proxy using Apache2. I've correctly installed SSL/TLS certificates and attempted to establish a direct connection to the Ollama service.
- Firewall Settings: I've set up the necessary firewall rules on Google Cloud and ensured that the correct ports are open.
- Documentation and Research: I've reviewed the documentation regarding HTTPS support for the Ollama service but haven't found a definitive solution. I've searched forums and other resources but couldn't find a clear resolution.

Preferred Solution: I've noticed that enabling HTTPS support for Ollama requires specific configurations, yet I haven't found a straightforward approach.

Additional Information: Could any insights be shared regarding the server's current status, Ollama service configurations, or any hints related to HTTPS? I would appreciate your assistance. I need guidance or suggestions to move forward with this issue.

Thank you.
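For reference, Ollama itself only serves plain HTTP (bound to 127.0.0.1:11434 by default, configurable via the OLLAMA_HOST environment variable), so TLS has to be terminated at the Apache reverse proxy, which then forwards to the local port. Once the proxy is in place, clients simply target the HTTPS URL. A minimal sketch with the official Python client; the hostname is a placeholder:

```python
from ollama import Client

# The proxy terminates TLS and forwards to http://127.0.0.1:11434 internally.
client = Client(host="https://ollama.example.com")

resp = client.chat(
    model="llama3.2",  # placeholder; any pulled model works
    messages=[{"role": "user", "content": "hello"}],
)
print(resp["message"]["content"])
```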
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.github.com/users/technovangelist/followers", "following_url": "https://api.github.com/users/technovangelist/following{/other_user}", "gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}", "starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions", "organizations_url": "https://api.github.com/users/technovangelist/orgs", "repos_url": "https://api.github.com/users/technovangelist/repos", "events_url": "https://api.github.com/users/technovangelist/events{/privacy}", "received_events_url": "https://api.github.com/users/technovangelist/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1318/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1318/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7253
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7253/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7253/comments
https://api.github.com/repos/ollama/ollama/issues/7253/events
https://github.com/ollama/ollama/issues/7253
2,597,847,962
I_kwDOJ0Z1Ps6a2AOa
7,253
The issue regarding concurrent processing with multiple GPU cards
{ "login": "SDAIer", "id": 174102361, "node_id": "U_kgDOCmCXWQ", "avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SDAIer", "html_url": "https://github.com/SDAIer", "followers_url": "https://api.github.com/users/SDAIer/followers", "following_url": "https://api.github.com/users/SDAIer/following{/other_user}", "gists_url": "https://api.github.com/users/SDAIer/gists{/gist_id}", "starred_url": "https://api.github.com/users/SDAIer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SDAIer/subscriptions", "organizations_url": "https://api.github.com/users/SDAIer/orgs", "repos_url": "https://api.github.com/users/SDAIer/repos", "events_url": "https://api.github.com/users/SDAIer/events{/privacy}", "received_events_url": "https://api.github.com/users/SDAIer/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
8
2024-10-18T15:36:44
2024-11-01T02:50:52
2024-11-01T02:50:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

### Premise

There are 4 GPU cards in the Linux server, and OLLAMA_SCHED_SPREAD=1 is set, with the aim of improving the model's inference efficiency through concurrent processing on multiple GPU cards.

### My Scenario

In the same process, I wish to sequentially call 3 different LLM models to handle the same task (such as summarizing long text), so that users can see the different content summarized by the 3 models and compare their processing effects. After the process runs, each model can be observed running on multiple GPU cards, but there are the following issues:

1. After the first model finishes running, the second model reports an OOM error, and the third model sometimes succeeds and sometimes fails.
2. Is this because, after the first model finishes, not all GPU resources are released? The second model would then fail for lack of GPU resources, while the third model succeeds if the resources have been released by the time it runs and fails if they still haven't.
3. If OLLAMA_SCHED_SPREAD=1 is not set, all three models run successfully, because ollama uses different GPU cards to handle the three models' requests separately. But this is slower, because each model then uses a single GPU card.

### My requirements are as follows

1. With OLLAMA_SCHED_SPREAD=1 set, how can GPU resources be released quickly after the first model finishes running, so that subsequent models do not fail due to insufficient GPU resources? (See the sketch below.)
2. If requirement 1 cannot be met, what methods can improve model inference efficiency through concurrent processing on multiple GPU cards?

### Other Questions (without setting OLLAMA_SCHED_SPREAD=1)

Ollama defaults to OLLAMA_NUM_PARALLEL=4, and if a single GPU card cannot provide the resources for 4 concurrent requests, ollama automatically sets PARALLEL=1. At that point, if a single GPU card can satisfy PARALLEL=1, one GPU card performs inference; if it cannot, ollama automatically uses 4 GPU cards. Is this the automatic, default mechanism?

### Summary

The overall requirement is how to improve the efficiency of concurrent inference with multiple GPU cards, thereby enhancing the user experience.

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.3.14
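On requirement 1, one hedged suggestion: the API's `keep_alive` parameter controls how long a model stays resident after a request, and setting it to 0 asks the server to unload the model (freeing its VRAM) as soon as the response is done. A sketch with the Python client; the model names are placeholders:

```python
import ollama

PROMPT = "Summarize the following text: ..."

# Run the same prompt through three models in sequence, asking the server
# to evict each model immediately so its VRAM is free before the next,
# multi-GPU model loads.
for model in ("llama3.1:70b", "qwen2.5:72b", "mixtral:8x7b"):
    resp = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        keep_alive=0,  # unload right after this request completes
    )
    print(model, "->", resp["message"]["content"][:200])
```

If the unload turns out to be asynchronous, a short pause or a check of `/api/ps` between models may still be needed before loading the next one.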
{ "login": "SDAIer", "id": 174102361, "node_id": "U_kgDOCmCXWQ", "avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SDAIer", "html_url": "https://github.com/SDAIer", "followers_url": "https://api.github.com/users/SDAIer/followers", "following_url": "https://api.github.com/users/SDAIer/following{/other_user}", "gists_url": "https://api.github.com/users/SDAIer/gists{/gist_id}", "starred_url": "https://api.github.com/users/SDAIer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SDAIer/subscriptions", "organizations_url": "https://api.github.com/users/SDAIer/orgs", "repos_url": "https://api.github.com/users/SDAIer/repos", "events_url": "https://api.github.com/users/SDAIer/events{/privacy}", "received_events_url": "https://api.github.com/users/SDAIer/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7253/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8355
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8355/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8355/comments
https://api.github.com/repos/ollama/ollama/issues/8355/events
https://github.com/ollama/ollama/issues/8355
2,776,703,481
I_kwDOJ0Z1Ps6lgSH5
8,355
we need Ollama Video-LLaVA
{ "login": "ixn3rd3mxn", "id": 119990214, "node_id": "U_kgDOBybnxg", "avatar_url": "https://avatars.githubusercontent.com/u/119990214?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ixn3rd3mxn", "html_url": "https://github.com/ixn3rd3mxn", "followers_url": "https://api.github.com/users/ixn3rd3mxn/followers", "following_url": "https://api.github.com/users/ixn3rd3mxn/following{/other_user}", "gists_url": "https://api.github.com/users/ixn3rd3mxn/gists{/gist_id}", "starred_url": "https://api.github.com/users/ixn3rd3mxn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ixn3rd3mxn/subscriptions", "organizations_url": "https://api.github.com/users/ixn3rd3mxn/orgs", "repos_url": "https://api.github.com/users/ixn3rd3mxn/repos", "events_url": "https://api.github.com/users/ixn3rd3mxn/events{/privacy}", "received_events_url": "https://api.github.com/users/ixn3rd3mxn/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2025-01-09T02:34:25
2025-01-09T03:39:08
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I want to use the Video-LLaVA model with Ollama, but it is not available in the Ollama library. Could someone add this model to Ollama, please? I tried [anas/video-llava](https://ollama.com/anas/video-llava) and [ManishThota/llava_next_video](https://ollama.com/ManishThota/llava_next_video), but neither works; they hit the bug described in [issues.Add Video-LLaVA](https://github.com/ollama/ollama/issues/3184) and [medium.manish-VideoLLaVA](https://medium.com/@manish.thota1999/an-experiment-to-unlock-ollamas-potential-video-question-answering-e2b4d1bfb5ba).

![{4B0D50B1-D464-4859-844B-9DFED42F774F}](https://github.com/user-attachments/assets/14223f0e-daeb-4bc4-9daf-c2b53904a958)
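Until a native video model lands, a common stopgap is to sample frames from the video and send them to an image-capable model such as llava. A rough sketch; the file name, frame count, and model are placeholders, and the quality will be well below a true video model:

```python
import cv2  # pip install opencv-python
import ollama

# Grab ~8 evenly spaced frames from the clip as JPEG bytes.
cap = cv2.VideoCapture("clip.mp4")
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
frames = []
for idx in range(0, total, max(total // 8, 1)):
    cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
    ok, frame = cap.read()
    if not ok:
        continue
    ok, jpg = cv2.imencode(".jpg", frame)
    if ok:
        frames.append(jpg.tobytes())
cap.release()

# Send all sampled frames with one question to a vision model.
resp = ollama.chat(
    model="llava",
    messages=[{
        "role": "user",
        "content": "These are frames from one video, in order. Describe what happens.",
        "images": frames,
    }],
)
print(resp["message"]["content"])
```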
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8355/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4303
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4303/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4303/comments
https://api.github.com/repos/ollama/ollama/issues/4303/events
https://github.com/ollama/ollama/pull/4303
2,288,553,461
PR_kwDOJ0Z1Ps5vCVXf
4,303
add project description
{ "login": "reid41", "id": 25558653, "node_id": "MDQ6VXNlcjI1NTU4NjUz", "avatar_url": "https://avatars.githubusercontent.com/u/25558653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reid41", "html_url": "https://github.com/reid41", "followers_url": "https://api.github.com/users/reid41/followers", "following_url": "https://api.github.com/users/reid41/following{/other_user}", "gists_url": "https://api.github.com/users/reid41/gists{/gist_id}", "starred_url": "https://api.github.com/users/reid41/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reid41/subscriptions", "organizations_url": "https://api.github.com/users/reid41/orgs", "repos_url": "https://api.github.com/users/reid41/repos", "events_url": "https://api.github.com/users/reid41/events{/privacy}", "received_events_url": "https://api.github.com/users/reid41/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-05-09T22:23:48
2024-11-24T23:55:09
2024-11-24T23:55:09
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4303", "html_url": "https://github.com/ollama/ollama/pull/4303", "diff_url": "https://github.com/ollama/ollama/pull/4303.diff", "patch_url": "https://github.com/ollama/ollama/pull/4303.patch", "merged_at": "2024-11-24T23:55:09" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4303/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2162
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2162/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2162/comments
https://api.github.com/repos/ollama/ollama/issues/2162/events
https://github.com/ollama/ollama/pull/2162
2,096,857,781
PR_kwDOJ0Z1Ps5k4Muw
2,162
Report more information about GPUs in verbose mode
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-01-23T19:43:51
2024-01-24T01:45:43
2024-01-24T01:45:40
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2162", "html_url": "https://github.com/ollama/ollama/pull/2162", "diff_url": "https://github.com/ollama/ollama/pull/2162.diff", "patch_url": "https://github.com/ollama/ollama/pull/2162.patch", "merged_at": "2024-01-24T01:45:40" }
This adds additional calls to both CUDA and ROCm management libraries to discover additional attributes about the GPU(s) detected in the system, and wires up runtime verbosity selection. When users hit problems with GPUs we can ask them to run with `OLLAMA_DEBUG=1 ollama serve` and share the server log.

Example output on a CUDA laptop:

```
% OLLAMA_DEBUG=1 ./ollama-linux-amd64 serve
...
time=2024-01-23T11:31:22.828-08:00 level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:256 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.545.23.08]"
CUDA driver version: 545.23.08
time=2024-01-23T11:31:22.859-08:00 level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:96 msg="Nvidia GPU detected"
[0] CUDA device name: NVIDIA GeForce GTX 1650 with Max-Q Design
[0] CUDA part number: nvmlDeviceGetSerial failed: 3
[0] CUDA vbios version: 90.17.31.00.26
[0] CUDA brand: 5
[0] CUDA totalMem 4294967296
[0] CUDA usedMem 3789357056
time=2024-01-23T11:31:22.865-08:00 level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:137 msg="CUDA Compute Capability detected: 7.5"
```

Example output on a ROCm GPU system:

```
% OLLAMA_DEBUG=1 ./ollama-linux-amd64 serve
...
time=2024-01-23T19:24:55.162Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:256 msg="Discovered GPU libraries: [/opt/rocm/lib/librocm_smi64.so.6.0.60000 /opt/rocm-6.0.0/lib/librocm_smi64.so.6.0.60000]"
time=2024-01-23T19:24:55.163Z level=INFO source=/go/src/github.com/jmorganca/ollama/gpu/gpu.go:106 msg="Radeon GPU detected"
discovered 1 ROCm GPU Devices
[0] ROCm device name: Navi 31 [Radeon RX 7900 XT/7900 XTX]
[0] ROCm GPU brand: Navi 31 [Radeon RX 7900 XT/7900 XTX]
[0] ROCm GPU vendor: Advanced Micro Devices, Inc. [AMD/ATI]
[0] ROCm GPU VRAM vendor: samsung
[0] ROCm GPU S/N: 43cfeecf3446fbf7
[0] ROCm GPU subsystem name: NITRO+ RX 7900 XTX Vapor-X
[0] ROCm GPU vbios version: 113-4E4710U-T4Y
[0] ROCm totalMem 25753026560
[0] ROCm usedMem 27852800
```

This also implements the TODO on ROCm to handle multiple GPUs reported by the management library.
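For cross-checking what the new debug output reports on the NVIDIA side, the same NVML attributes the Go code reads can be queried from Python via the nvidia-ml-py bindings. A small sketch, assuming an NVIDIA GPU and `pip install nvidia-ml-py`:

```python
import pynvml

# Query the same NVML attributes the debug log prints for device 0.
pynvml.nvmlInit()
h = pynvml.nvmlDeviceGetHandleByIndex(0)
mem = pynvml.nvmlDeviceGetMemoryInfo(h)

print("name:", pynvml.nvmlDeviceGetName(h))
print("driver:", pynvml.nvmlSystemGetDriverVersion())
print("vbios:", pynvml.nvmlDeviceGetVbiosVersion(h))
print("totalMem:", mem.total, "usedMem:", mem.used)

pynvml.nvmlShutdown()
```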
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2162/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1798
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1798/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1798/comments
https://api.github.com/repos/ollama/ollama/issues/1798/events
https://github.com/ollama/ollama/issues/1798
2,066,610,955
I_kwDOJ0Z1Ps57LfsL
1,798
failed to verify certificate: x509: certificate signed by unknown authority
{ "login": "jooyoungseo", "id": 19754711, "node_id": "MDQ6VXNlcjE5NzU0NzEx", "avatar_url": "https://avatars.githubusercontent.com/u/19754711?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jooyoungseo", "html_url": "https://github.com/jooyoungseo", "followers_url": "https://api.github.com/users/jooyoungseo/followers", "following_url": "https://api.github.com/users/jooyoungseo/following{/other_user}", "gists_url": "https://api.github.com/users/jooyoungseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/jooyoungseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jooyoungseo/subscriptions", "organizations_url": "https://api.github.com/users/jooyoungseo/orgs", "repos_url": "https://api.github.com/users/jooyoungseo/repos", "events_url": "https://api.github.com/users/jooyoungseo/events{/privacy}", "received_events_url": "https://api.github.com/users/jooyoungseo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
8
2024-01-05T02:04:27
2025-01-07T06:10:01
2024-01-08T19:03:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
In my HPC system, I have to use apptainer instead of docker to run ollama. During the pull, I encountered the following certificate issue. I was wondering if this could be addressed from the ollama side.

```sh
Apptainer> ollama serve &
[1] 2914729
Apptainer> 2024/01/04 15:51:13 images.go:737: total blobs: 0
2024/01/04 15:51:13 images.go:744: total unused blobs removed: 0
2024/01/04 15:51:13 routes.go:895: Listening on [::]:11434 (version 0.1.17)
ollama pull llama2
[GIN] 2024/01/04 - 15:51:24 | 200 | 54.686µs | 127.0.0.1 | HEAD "/"
2024/01/04 15:51:24 images.go:1066: request failed: Get https://registry.ollama.ai/v2/library/llama2/manifests/latest: tls: failed to verify certificate: x509: certificate signed by unknown authority
[GIN] 2024/01/04 - 15:51:24 | 200 | 19.314959ms | 127.0.0.1 | POST "/api/pull"
pulling manifest
Error: pull model manifest: Get https://registry.ollama.ai/v2/library/llama2/manifests/latest: tls: failed to verify certificate: x509: certificate signed by unknown authority
Apptainer>
```
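Go verifies TLS against the system CA bundle, so this usually means the apptainer image is missing CA certificates. A quick in-container diagnostic with Python's ssl module, which consults the same system bundle; if it fails the same way, binding the host's `/etc/ssl/certs` into the container, or pointing `SSL_CERT_FILE` at a CA bundle, should let the pull proceed (treat both as suggestions, not confirmed fixes):

```python
import socket
import ssl

HOST = "registry.ollama.ai"

# Attempt a verified TLS handshake against the registry using the
# container's default CA store.
ctx = ssl.create_default_context()
try:
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("OK, peer subject:", tls.getpeercert()["subject"])
except ssl.SSLCertVerificationError as e:
    print("verification failed (CA bundle likely missing):", e)
```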
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1798/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1798/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/7607
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7607/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7607/comments
https://api.github.com/repos/ollama/ollama/issues/7607/events
https://github.com/ollama/ollama/pull/7607
2,647,964,758
PR_kwDOJ0Z1Ps6Bc2eD
7,607
feat: add vibe app to readme
{ "login": "thewh1teagle", "id": 61390950, "node_id": "MDQ6VXNlcjYxMzkwOTUw", "avatar_url": "https://avatars.githubusercontent.com/u/61390950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thewh1teagle", "html_url": "https://github.com/thewh1teagle", "followers_url": "https://api.github.com/users/thewh1teagle/followers", "following_url": "https://api.github.com/users/thewh1teagle/following{/other_user}", "gists_url": "https://api.github.com/users/thewh1teagle/gists{/gist_id}", "starred_url": "https://api.github.com/users/thewh1teagle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thewh1teagle/subscriptions", "organizations_url": "https://api.github.com/users/thewh1teagle/orgs", "repos_url": "https://api.github.com/users/thewh1teagle/repos", "events_url": "https://api.github.com/users/thewh1teagle/events{/privacy}", "received_events_url": "https://api.github.com/users/thewh1teagle/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-11-11T02:51:10
2024-11-20T18:45:10
2024-11-20T18:45:10
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7607", "html_url": "https://github.com/ollama/ollama/pull/7607", "diff_url": "https://github.com/ollama/ollama/pull/7607.diff", "patch_url": "https://github.com/ollama/ollama/pull/7607.patch", "merged_at": "2024-11-20T18:45:10" }
Add the [vibe](https://github.com/thewh1teagle/vibe) app, which just added Ollama support.
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7607/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5665
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5665/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5665/comments
https://api.github.com/repos/ollama/ollama/issues/5665/events
https://github.com/ollama/ollama/pull/5665
2,406,713,540
PR_kwDOJ0Z1Ps51SCMs
5,665
Refactor cmd.go for Improved Readability
{ "login": "hasitpbhatt", "id": 778585, "node_id": "MDQ6VXNlcjc3ODU4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/778585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hasitpbhatt", "html_url": "https://github.com/hasitpbhatt", "followers_url": "https://api.github.com/users/hasitpbhatt/followers", "following_url": "https://api.github.com/users/hasitpbhatt/following{/other_user}", "gists_url": "https://api.github.com/users/hasitpbhatt/gists{/gist_id}", "starred_url": "https://api.github.com/users/hasitpbhatt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hasitpbhatt/subscriptions", "organizations_url": "https://api.github.com/users/hasitpbhatt/orgs", "repos_url": "https://api.github.com/users/hasitpbhatt/repos", "events_url": "https://api.github.com/users/hasitpbhatt/events{/privacy}", "received_events_url": "https://api.github.com/users/hasitpbhatt/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
0
2024-07-13T05:26:36
2024-07-15T01:05:06
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5665", "html_url": "https://github.com/ollama/ollama/pull/5665", "diff_url": "https://github.com/ollama/ollama/pull/5665.diff", "patch_url": "https://github.com/ollama/ollama/pull/5665.patch", "merged_at": null }
This PR refactors cmd.go to improve readability by eliminating unnecessary nesting, removing redundant count variables, and replacing HasPrefix with TrimPrefix for path manipulation.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5665/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4626
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4626/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4626/comments
https://api.github.com/repos/ollama/ollama/issues/4626/events
https://github.com/ollama/ollama/issues/4626
2,316,594,368
I_kwDOJ0Z1Ps6KFGzA
4,626
about model quantization
{ "login": "andyyumiao", "id": 11346379, "node_id": "MDQ6VXNlcjExMzQ2Mzc5", "avatar_url": "https://avatars.githubusercontent.com/u/11346379?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andyyumiao", "html_url": "https://github.com/andyyumiao", "followers_url": "https://api.github.com/users/andyyumiao/followers", "following_url": "https://api.github.com/users/andyyumiao/following{/other_user}", "gists_url": "https://api.github.com/users/andyyumiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/andyyumiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andyyumiao/subscriptions", "organizations_url": "https://api.github.com/users/andyyumiao/orgs", "repos_url": "https://api.github.com/users/andyyumiao/repos", "events_url": "https://api.github.com/users/andyyumiao/events{/privacy}", "received_events_url": "https://api.github.com/users/andyyumiao/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
3
2024-05-25T02:24:42
2024-05-28T20:40:22
2024-05-28T20:40:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
What are the quantization parameters used for the llama3 model in Ollama? For example, the llama3 version, quantization parameters, etc. The llama3 8b version that I quantized myself using llama.cpp is not as good as the llama3 8b version that comes with Ollama, so I want to know the reason.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4626/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7328
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7328/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7328/comments
https://api.github.com/repos/ollama/ollama/issues/7328/events
https://github.com/ollama/ollama/issues/7328
2,607,539,111
I_kwDOJ0Z1Ps6ba-On
7,328
Performance degradation with 8B+ models on Windows Radeon
{ "login": "7shi", "id": 178381, "node_id": "MDQ6VXNlcjE3ODM4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/178381?v=4", "gravatar_id": "", "url": "https://api.github.com/users/7shi", "html_url": "https://github.com/7shi", "followers_url": "https://api.github.com/users/7shi/followers", "following_url": "https://api.github.com/users/7shi/following{/other_user}", "gists_url": "https://api.github.com/users/7shi/gists{/gist_id}", "starred_url": "https://api.github.com/users/7shi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/7shi/subscriptions", "organizations_url": "https://api.github.com/users/7shi/orgs", "repos_url": "https://api.github.com/users/7shi/repos", "events_url": "https://api.github.com/users/7shi/events{/privacy}", "received_events_url": "https://api.github.com/users/7shi/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
3
2024-10-23T07:14:50
2024-10-23T16:44:41
2024-10-23T16:44:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?
When running models 8B or larger on Windows with a Radeon GPU, performance is slower than CPU-only mode, despite having sufficient VRAM available.

Environment:
- OS: Windows 11 Home [10.0.22631]
- CPU: AMD Ryzen 5 5600X 6-Core Processor
- GPU: Radeon RX 7600 XT
- VRAM: 16GB

Root Cause Investigation:
I've identified that this is caused by a HIP SDK behavior where memory allocations larger than 4GB are being redirected to shared GPU memory instead of using dedicated VRAM. I've reported this behavior to the HIP team here: https://github.com/ROCm/HIP/issues/3644

Current Status:
As this is a HIP-level issue, improvement in model performance will depend on resolution from the HIP team. Creating this issue for visibility and to help others who might encounter similar performance degradation with large models on Windows Radeon setups.

Impact:
- Models 8B and larger run slower than CPU-only mode
- Available VRAM remains unused while slower shared memory is being utilized

![379019648-866210ef-4a2c-4525-9026-9f614e19694e](https://github.com/user-attachments/assets/07a1739a-3ee0-4b68-b515-b5ee6c0f4b6f)

### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.14
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7328/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6689
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6689/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6689/comments
https://api.github.com/repos/ollama/ollama/issues/6689/events
https://github.com/ollama/ollama/issues/6689
2,511,934,082
I_kwDOJ0Z1Ps6VuRKC
6,689
Reflection 70B fix?
{ "login": "gileneusz", "id": 34601970, "node_id": "MDQ6VXNlcjM0NjAxOTcw", "avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gileneusz", "html_url": "https://github.com/gileneusz", "followers_url": "https://api.github.com/users/gileneusz/followers", "following_url": "https://api.github.com/users/gileneusz/following{/other_user}", "gists_url": "https://api.github.com/users/gileneusz/gists{/gist_id}", "starred_url": "https://api.github.com/users/gileneusz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gileneusz/subscriptions", "organizations_url": "https://api.github.com/users/gileneusz/orgs", "repos_url": "https://api.github.com/users/gileneusz/repos", "events_url": "https://api.github.com/users/gileneusz/events{/privacy}", "received_events_url": "https://api.github.com/users/gileneusz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
5
2024-09-07T17:24:38
2024-09-08T23:30:19
2024-09-08T23:30:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
There are rumors that the Reflection model does not run properly on Ollama; can anyone confirm this? Comments here: https://www.reddit.com/r/LocalLLaMA/comments/1fa72an/reflectionllama3170b_available_on_ollama/
{ "login": "gileneusz", "id": 34601970, "node_id": "MDQ6VXNlcjM0NjAxOTcw", "avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gileneusz", "html_url": "https://github.com/gileneusz", "followers_url": "https://api.github.com/users/gileneusz/followers", "following_url": "https://api.github.com/users/gileneusz/following{/other_user}", "gists_url": "https://api.github.com/users/gileneusz/gists{/gist_id}", "starred_url": "https://api.github.com/users/gileneusz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gileneusz/subscriptions", "organizations_url": "https://api.github.com/users/gileneusz/orgs", "repos_url": "https://api.github.com/users/gileneusz/repos", "events_url": "https://api.github.com/users/gileneusz/events{/privacy}", "received_events_url": "https://api.github.com/users/gileneusz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6689/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5944
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5944/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5944/comments
https://api.github.com/repos/ollama/ollama/issues/5944/events
https://github.com/ollama/ollama/issues/5944
2,429,563,960
I_kwDOJ0Z1Ps6Q0DQ4
5,944
Most difficult error ever: no suitable llama servers found.
{ "login": "Swephoenix", "id": 148555635, "node_id": "U_kgDOCNrHcw", "avatar_url": "https://avatars.githubusercontent.com/u/148555635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Swephoenix", "html_url": "https://github.com/Swephoenix", "followers_url": "https://api.github.com/users/Swephoenix/followers", "following_url": "https://api.github.com/users/Swephoenix/following{/other_user}", "gists_url": "https://api.github.com/users/Swephoenix/gists{/gist_id}", "starred_url": "https://api.github.com/users/Swephoenix/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Swephoenix/subscriptions", "organizations_url": "https://api.github.com/users/Swephoenix/orgs", "repos_url": "https://api.github.com/users/Swephoenix/repos", "events_url": "https://api.github.com/users/Swephoenix/events{/privacy}", "received_events_url": "https://api.github.com/users/Swephoenix/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2024-07-25T10:05:37
2024-10-24T01:00:10
2024-07-26T20:23:57
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?
I've reinstalled Ollama several times, but that doesn't fix the error I get at startup when I manually type `ollama run llama3:8b` in CMD (the same happens with any other model that is listed and recognized by Ollama).

![image](https://github.com/user-attachments/assets/e0bd27f4-698e-4a7a-81a4-b0ef90330e7b)

### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
Latest
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5944/reactions", "total_count": 4, "+1": 3, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5944/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8282
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8282/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8282/comments
https://api.github.com/repos/ollama/ollama/issues/8282/events
https://github.com/ollama/ollama/issues/8282
2,765,170,432
I_kwDOJ0Z1Ps6k0ScA
8,282
DeepSeek VL v2
{ "login": "ddpasa", "id": 112642920, "node_id": "U_kgDOBrbLaA", "avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ddpasa", "html_url": "https://github.com/ddpasa", "followers_url": "https://api.github.com/users/ddpasa/followers", "following_url": "https://api.github.com/users/ddpasa/following{/other_user}", "gists_url": "https://api.github.com/users/ddpasa/gists{/gist_id}", "starred_url": "https://api.github.com/users/ddpasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ddpasa/subscriptions", "organizations_url": "https://api.github.com/users/ddpasa/orgs", "repos_url": "https://api.github.com/users/ddpasa/repos", "events_url": "https://api.github.com/users/ddpasa/events{/privacy}", "received_events_url": "https://api.github.com/users/ddpasa/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2025-01-01T17:09:25
2025-01-01T17:09:25
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/collections/deepseek-ai/deepseek-vl2-675c22accc456d3beb4613ab

There are 3 versions: tiny, small, and the default.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8282/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3614
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3614/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3614/comments
https://api.github.com/repos/ollama/ollama/issues/3614/events
https://github.com/ollama/ollama/issues/3614
2,239,786,761
I_kwDOJ0Z1Ps6FgG8J
3,614
API response content contains leading space before some non-alphabetical chars
{ "login": "Propheticus", "id": 6628064, "node_id": "MDQ6VXNlcjY2MjgwNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/6628064?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Propheticus", "html_url": "https://github.com/Propheticus", "followers_url": "https://api.github.com/users/Propheticus/followers", "following_url": "https://api.github.com/users/Propheticus/following{/other_user}", "gists_url": "https://api.github.com/users/Propheticus/gists{/gist_id}", "starred_url": "https://api.github.com/users/Propheticus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Propheticus/subscriptions", "organizations_url": "https://api.github.com/users/Propheticus/orgs", "repos_url": "https://api.github.com/users/Propheticus/repos", "events_url": "https://api.github.com/users/Propheticus/events{/privacy}", "received_events_url": "https://api.github.com/users/Propheticus/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 7706482389, "node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q", "url": "https://api.github.com/repos/ollama/ollama/labels/api", "name": "api", "color": "bfdadc", "default": false, "description": "" } ]
open
false
null
[]
null
0
2024-04-12T10:44:07
2024-11-06T17:41:43
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?
When calling the /v1/chat/completions endpoint, the response sometimes contains a leading space, e.g. when asking for a markdown table the first char is a `|`, or when asking for a quote the first char is a `_` (to later end with another to make _italic_). The content returned often, but not always, looks like:

```
"message":{
  "role":"assistant",
  "content":" \"_The only way to do great work is to love what you do._\" - Steve Jobs"
},
"finish_reason":"stop"
```

The same happens when streaming data chunks:

```
data: {"id":"chatcmpl-139","object":"chat.completion.chunk","created":1712915479,"model":"mistral7bq5","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" **\""},"finish_reason":null}]}
data: {"id":"chatcmpl-139","object":"chat.completion.chunk","created":1712915479,"model":"mistral7bq5","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"The"},"finish_reason":null}]}
data: {"id":"chatcmpl-139","object":"chat.completion.chunk","created":1712915479,"model":"mistral7bq5","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" only"},"finish_reason":null}]}
data: {"id":"chatcmpl-139","object":"chat.completion.chunk","created":1712915479,"model":"mistral7bq5","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" way"},"finish_reason":null}]}
```

Why is this a problem? A leading space ruins the header row of a markdown table. It's also not in line with the OpenAI API specs.

### What did you expect to see?
No leading spaces, e.g.:

```
"message":{
  "role":"assistant",
  "content":"\"_The only way to do great work is to love what you do._\" - Steve Jobs"
},
"finish_reason":"stop"
```

### Steps to reproduce
Calling the completions endpoint from either Obsidian (using the BMO chatbot plugin) or from Notepad++ using the "Rest API to text" plugin. Ask the model Mistral instruct 7B v0.2 Q5_K_M (gguf) to make me a markdown table or output text in quotes.

In the syntax understood by the "Rest API to text" NP++ plugin:

```
http POST http://127.0.0.1:11434/v1/chat/completions
**headers**
content-type: application/json
**RestApiToTextOptions**
ShowResponseHeaders
**body**
{
  "messages": [
    {
      "content": "You are a helpful assistant.",
      "role": "system"
    },
    {
      "content": "make me a markdown table of 3 columns and 2 rows. Don't use a code block.",
      "role": "user"
    }
  ],
  "model": "mistral7bq5",
  "stream": true
}
```

50/50 chance of a leading space.

### Are there any recent changes that introduced the issue?
_No response_

### OS
Windows

### Architecture
amd64

### Platform
_No response_

### Ollama version
0.1.31

### GPU
AMD

### GPU info
AMD Radeon RX 6800 XT
```
GcnArchName: gfx1030
Total Mem: 16918130688
```
```
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon RX 6800 XT, compute capability 10.3, VMM: no
```

### CPU
AMD

### Other software
_No response_
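As a stopgap on the client side, the leading space can be trimmed before rendering. A minimal sketch with curl and jq, reusing the reporter's local model tag `mistral7bq5` (substitute your own):

```sh
# Request a completion, then trim one leading space from the message
# content if present; ltrimstr only removes the prefix when it matches.
curl -s http://127.0.0.1:11434/v1/chat/completions \
  -H 'content-type: application/json' \
  -d '{"model":"mistral7bq5","messages":[{"role":"user","content":"Quote Steve Jobs."}]}' \
| jq '.choices[0].message.content |= ltrimstr(" ")'
```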
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3614/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3614/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4346
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4346/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4346/comments
https://api.github.com/repos/ollama/ollama/issues/4346/events
https://github.com/ollama/ollama/issues/4346
2,290,728,121
I_kwDOJ0Z1Ps6Iiby5
4,346
Ollama does not list installed models
{ "login": "javiergcim", "id": 52302482, "node_id": "MDQ6VXNlcjUyMzAyNDgy", "avatar_url": "https://avatars.githubusercontent.com/u/52302482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/javiergcim", "html_url": "https://github.com/javiergcim", "followers_url": "https://api.github.com/users/javiergcim/followers", "following_url": "https://api.github.com/users/javiergcim/following{/other_user}", "gists_url": "https://api.github.com/users/javiergcim/gists{/gist_id}", "starred_url": "https://api.github.com/users/javiergcim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/javiergcim/subscriptions", "organizations_url": "https://api.github.com/users/javiergcim/orgs", "repos_url": "https://api.github.com/users/javiergcim/repos", "events_url": "https://api.github.com/users/javiergcim/events{/privacy}", "received_events_url": "https://api.github.com/users/javiergcim/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
8
2024-05-11T06:58:29
2024-05-13T16:48:07
2024-05-13T16:48:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?
The command `ollama list` does not list the installed models on the system (at least those created from a local GGUF file), which prevents other utilities (for example, WebUI) from discovering them. However, the models are there and can be invoked by specifying their name explicitly, for example: `ollama run MyModel`.

### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.35
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4346/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2541
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2541/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2541/comments
https://api.github.com/repos/ollama/ollama/issues/2541/events
https://github.com/ollama/ollama/pull/2541
2,138,805,322
PR_kwDOJ0Z1Ps5nGQp4
2,541
fix: use requested model template
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-02-16T15:05:05
2024-02-16T19:02:13
2024-02-16T19:02:13
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2541", "html_url": "https://github.com/ollama/ollama/pull/2541", "diff_url": "https://github.com/ollama/ollama/pull/2541.diff", "patch_url": "https://github.com/ollama/ollama/pull/2541.patch", "merged_at": null }
As reported in scenario 1 of #2492

When a request was made to a model that inherits from the currently loaded model, the system and template were not updated in the `/chat` endpoint. The fix is to use the requested model rather than the loaded one.

Steps to reproduce:
1. Create a model that overrides the system prompt of another model:
   ```
   FROM phi
   SYSTEM """I want you to speak French only."""
   ```
   `ollama create phi-french -f ~/models/phi-french/Modelfile`
2. Run the base model: `ollama run phi`
3. Quit the repl and run the custom model:
   ```
   ollama run phi-french
   ```

The system message from the base model was not changed, as the loaded model did not change.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2541/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/2541/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8515
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8515/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8515/comments
https://api.github.com/repos/ollama/ollama/issues/8515/events
https://github.com/ollama/ollama/pull/8515
2,801,364,312
PR_kwDOJ0Z1Ps6Id8RD
8,515
Remove tfs_z from documentation.
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2025-01-21T10:21:42
2025-01-21T17:36:01
2025-01-21T17:29:00
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8515", "html_url": "https://github.com/ollama/ollama/pull/8515", "diff_url": "https://github.com/ollama/ollama/pull/8515.diff", "patch_url": "https://github.com/ollama/ollama/pull/8515.patch", "merged_at": "2025-01-21T17:29:00" }
tfs_z was removed from llama.cpp in https://github.com/ggerganov/llama.cpp/pull/10071, and the vendor sync in https://github.com/ollama/ollama/pull/7875 propagated that removal into ollama. Fixes: https://github.com/ollama/ollama/issues/8514
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8515/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8515/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3644
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3644/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3644/comments
https://api.github.com/repos/ollama/ollama/issues/3644/events
https://github.com/ollama/ollama/issues/3644
2,243,012,765
I_kwDOJ0Z1Ps6Fsaid
3,644
Is the model's PROMPT maximum number of tokens determined by the inference tool?
{ "login": "17Reset", "id": 122418720, "node_id": "U_kgDOB0v2IA", "avatar_url": "https://avatars.githubusercontent.com/u/122418720?v=4", "gravatar_id": "", "url": "https://api.github.com/users/17Reset", "html_url": "https://github.com/17Reset", "followers_url": "https://api.github.com/users/17Reset/followers", "following_url": "https://api.github.com/users/17Reset/following{/other_user}", "gists_url": "https://api.github.com/users/17Reset/gists{/gist_id}", "starred_url": "https://api.github.com/users/17Reset/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/17Reset/subscriptions", "organizations_url": "https://api.github.com/users/17Reset/orgs", "repos_url": "https://api.github.com/users/17Reset/repos", "events_url": "https://api.github.com/users/17Reset/events{/privacy}", "received_events_url": "https://api.github.com/users/17Reset/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
3
2024-04-15T08:16:07
2024-04-29T08:57:57
2024-04-15T19:25:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When I use ollama to run inference with my Smuag-72B model, there is no output when the input prompt has 150 tokens, but the output is normal when it is scaled down to about 100.
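To the title question: the prompt budget is set by the runner's context window (`num_ctx`), capped by the model's trained context length, and it can be raised per request. A minimal sketch against Ollama's generate API; the model tag here is illustrative, not the reporter's actual tag:

```sh
# Raise the context window for one generation; num_ctx is a runtime
# option enforced by the inference tool, not baked into the weights.
curl -s http://localhost:11434/api/generate -d '{
  "model": "smaug:72b",
  "prompt": "Summarize this document ...",
  "options": { "num_ctx": 4096 }
}'
```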
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3644/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3644/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1943
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1943/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1943/comments
https://api.github.com/repos/ollama/ollama/issues/1943/events
https://github.com/ollama/ollama/issues/1943
2,078,067,881
I_kwDOJ0Z1Ps573Myp
1,943
[Feature] Add the ability to run a command or start a shell from the interactive mode
{ "login": "jimscard", "id": 26580570, "node_id": "MDQ6VXNlcjI2NTgwNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/26580570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jimscard", "html_url": "https://github.com/jimscard", "followers_url": "https://api.github.com/users/jimscard/followers", "following_url": "https://api.github.com/users/jimscard/following{/other_user}", "gists_url": "https://api.github.com/users/jimscard/gists{/gist_id}", "starred_url": "https://api.github.com/users/jimscard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jimscard/subscriptions", "organizations_url": "https://api.github.com/users/jimscard/orgs", "repos_url": "https://api.github.com/users/jimscard/repos", "events_url": "https://api.github.com/users/jimscard/events{/privacy}", "received_events_url": "https://api.github.com/users/jimscard/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-01-12T04:54:06
2024-03-11T19:19:25
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Many times, I'll go into the CLI client interactive mode, e.g., `ollama run [model]`, to get help on doing something. Then, I have to start up another terminal window in order to actually do it. To make this more user-friendly, two keyboard shortcuts should be added to the ollama run interactive mode: `!` and `shell`. (It would be fine if they have to be preceded by a slash, e.g., `/!` and `/shell`.)

The first option (referred to as "bang") takes a command line and, upon the user pressing enter, spawns that command. When the command completes, the user is prompted to continue. For example, `/!ls` runs the `ls` command and displays the results, and then prompts "Press Enter to continue" before it returns to the ollama run prompt.

The second option spawns the user's preferred shell. This should behave similarly to the analogous behavior in vim, e.g., `:!ps` shows the list of processes, and `:shell` brings up my shell; exiting with ctrl+d etc. returns to the ollama run interactive prompt.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1943/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1943/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3897
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3897/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3897/comments
https://api.github.com/repos/ollama/ollama/issues/3897/events
https://github.com/ollama/ollama/pull/3897
2,262,417,048
PR_kwDOJ0Z1Ps5tqbfk
3,897
add information about compiling with intel mkl
{ "login": "kannon92", "id": 3780425, "node_id": "MDQ6VXNlcjM3ODA0MjU=", "avatar_url": "https://avatars.githubusercontent.com/u/3780425?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kannon92", "html_url": "https://github.com/kannon92", "followers_url": "https://api.github.com/users/kannon92/followers", "following_url": "https://api.github.com/users/kannon92/following{/other_user}", "gists_url": "https://api.github.com/users/kannon92/gists{/gist_id}", "starred_url": "https://api.github.com/users/kannon92/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kannon92/subscriptions", "organizations_url": "https://api.github.com/users/kannon92/orgs", "repos_url": "https://api.github.com/users/kannon92/repos", "events_url": "https://api.github.com/users/kannon92/events{/privacy}", "received_events_url": "https://api.github.com/users/kannon92/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-04-25T00:58:01
2024-06-04T13:04:14
2024-05-06T21:48:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3897", "html_url": "https://github.com/ollama/ollama/pull/3897", "diff_url": "https://github.com/ollama/ollama/pull/3897.diff", "patch_url": "https://github.com/ollama/ollama/pull/3897.patch", "merged_at": null }
llama.cpp has some information about how to compile with non-GPU options. I added a section on BLAS options for non-GPU hosts. I use Intel MKL and compile ollama (and llama.cpp) with this library.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3897/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6212
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6212/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6212/comments
https://api.github.com/repos/ollama/ollama/issues/6212/events
https://github.com/ollama/ollama/issues/6212
2,451,872,643
I_kwDOJ0Z1Ps6SJJuD
6,212
show --modelfile (still) doesn't properly quote MESSAGE statements
{ "login": "Maltz42", "id": 20978744, "node_id": "MDQ6VXNlcjIwOTc4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/20978744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Maltz42", "html_url": "https://github.com/Maltz42", "followers_url": "https://api.github.com/users/Maltz42/followers", "following_url": "https://api.github.com/users/Maltz42/following{/other_user}", "gists_url": "https://api.github.com/users/Maltz42/gists{/gist_id}", "starred_url": "https://api.github.com/users/Maltz42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Maltz42/subscriptions", "organizations_url": "https://api.github.com/users/Maltz42/orgs", "repos_url": "https://api.github.com/users/Maltz42/repos", "events_url": "https://api.github.com/users/Maltz42/events{/privacy}", "received_events_url": "https://api.github.com/users/Maltz42/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-08-07T00:03:29
2024-08-07T05:13:06
2024-08-07T05:13:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?
The patch added in v0.3.3 for issue #6103 didn't work, and actually made the situation harder to mitigate with find/replace. I request that the patch be rolled back, the issue be re-opened, and the quoting of MESSAGE strings be revisited and tested more thoroughly. Thanks! (Or let me know if I should copy my posts from that issue here and just start anew.)

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.4
{ "login": "Maltz42", "id": 20978744, "node_id": "MDQ6VXNlcjIwOTc4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/20978744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Maltz42", "html_url": "https://github.com/Maltz42", "followers_url": "https://api.github.com/users/Maltz42/followers", "following_url": "https://api.github.com/users/Maltz42/following{/other_user}", "gists_url": "https://api.github.com/users/Maltz42/gists{/gist_id}", "starred_url": "https://api.github.com/users/Maltz42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Maltz42/subscriptions", "organizations_url": "https://api.github.com/users/Maltz42/orgs", "repos_url": "https://api.github.com/users/Maltz42/repos", "events_url": "https://api.github.com/users/Maltz42/events{/privacy}", "received_events_url": "https://api.github.com/users/Maltz42/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6212/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7046
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7046/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7046/comments
https://api.github.com/repos/ollama/ollama/issues/7046/events
https://github.com/ollama/ollama/issues/7046
2,556,762,944
I_kwDOJ0Z1Ps6YZRtA
7,046
Loading Llama model to a Google Cloud Run Ollama Container through a Dockerfile
{ "login": "waynemorphic", "id": 37283450, "node_id": "MDQ6VXNlcjM3MjgzNDUw", "avatar_url": "https://avatars.githubusercontent.com/u/37283450?v=4", "gravatar_id": "", "url": "https://api.github.com/users/waynemorphic", "html_url": "https://github.com/waynemorphic", "followers_url": "https://api.github.com/users/waynemorphic/followers", "following_url": "https://api.github.com/users/waynemorphic/following{/other_user}", "gists_url": "https://api.github.com/users/waynemorphic/gists{/gist_id}", "starred_url": "https://api.github.com/users/waynemorphic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/waynemorphic/subscriptions", "organizations_url": "https://api.github.com/users/waynemorphic/orgs", "repos_url": "https://api.github.com/users/waynemorphic/repos", "events_url": "https://api.github.com/users/waynemorphic/events{/privacy}", "received_events_url": "https://api.github.com/users/waynemorphic/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6677677816, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A", "url": "https://api.github.com/repos/ollama/ollama/labels/docker", "name": "docker", "color": "0052CC", "default": false, "description": "Issues relating to using ollama in containers" } ]
closed
false
null
[]
null
2
2024-09-30T13:54:12
2024-09-30T19:00:11
2024-09-30T19:00:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have been trying to Dockerize Ollama and load the Llama3.1 model into a Google Cloud Run deployment. While Ollama runs as expected in Cloud Run, the model is not loaded: hitting `v1/models` returns a null result. I have a hacky solution on Compute Engine, where I use an SSH connection to run the Dockerized image and then pull and run the model. However, this solution will be neither cost-effective nor efficient in the long term. I would like help figuring out how to load LLMs into Ollama through a single Dockerfile deployed to Google Cloud Run, if this is possible. Here is my current Dockerfile. ```Dockerfile FROM ollama/ollama WORKDIR /app RUN apt-get update && apt-get install -y wget && apt-get install -y --no-install-recommends git curl ENV DEBIAN_FRONTEND=noninteractive ENV OLLAMA_KEEP_ALIVE=24h EXPOSE 11434 VOLUME [ "./ollama/ollama:/root/.ollama" ] ENTRYPOINT ["/bin/bash", "-c", "ollama serve & sleep 5 && ollama run llama3.1 && tail -f /dev/null"] ``` ### OS Docker ### GPU _No response_ ### CPU _No response_ ### Ollama version _No response_
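One alternative to baking `ollama run` into the ENTRYPOINT is to pull the model through Ollama's HTTP API once the server is up, e.g. from a small startup script. A minimal sketch, assuming the standard `/api/pull` endpoint on the default port 11434 (the model name mirrors the Dockerfile above):

```python
import json
import time

import requests

BASE = "http://localhost:11434"

# Wait until `ollama serve` starts accepting connections.
for _ in range(30):
    try:
        requests.get(BASE, timeout=1)
        break
    except requests.ConnectionError:
        time.sleep(1)

# /api/pull streams JSON status lines until it reports "success".
with requests.post(f"{BASE}/api/pull", json={"name": "llama3.1"}, stream=True) as resp:
    for line in resp.iter_lines():
        if line:
            print(json.loads(line).get("status"))
```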
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7046/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7589
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7589/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7589/comments
https://api.github.com/repos/ollama/ollama/issues/7589/events
https://github.com/ollama/ollama/issues/7589
2,646,531,312
I_kwDOJ0Z1Ps6dvtzw
7,589
Adding option to default `/clear` after each query
{ "login": "soulrrrrr", "id": 49684138, "node_id": "MDQ6VXNlcjQ5Njg0MTM4", "avatar_url": "https://avatars.githubusercontent.com/u/49684138?v=4", "gravatar_id": "", "url": "https://api.github.com/users/soulrrrrr", "html_url": "https://github.com/soulrrrrr", "followers_url": "https://api.github.com/users/soulrrrrr/followers", "following_url": "https://api.github.com/users/soulrrrrr/following{/other_user}", "gists_url": "https://api.github.com/users/soulrrrrr/gists{/gist_id}", "starred_url": "https://api.github.com/users/soulrrrrr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/soulrrrrr/subscriptions", "organizations_url": "https://api.github.com/users/soulrrrrr/orgs", "repos_url": "https://api.github.com/users/soulrrrrr/repos", "events_url": "https://api.github.com/users/soulrrrrr/events{/privacy}", "received_events_url": "https://api.github.com/users/soulrrrrr/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-11-09T20:44:31
2024-11-13T19:59:40
2024-11-13T19:59:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
As there is a `/clear` command to clear the session context, it would be great if there were an option to automatically run `/clear` after every query. I am using the LLM as a translator, so this feature would be helpful: as the prompt grows longer and longer, the inference time increases.
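Until such an option exists, one workaround outside the interactive CLI is to send every query as an independent API request, so no conversation history accumulates between calls. A minimal sketch, assuming the standard `/api/chat` endpoint (the model name is illustrative):

```python
import requests

def translate(text: str) -> str:
    # Only the current message is sent, so the context never grows
    # between queries -- the same effect as running /clear each time.
    resp = requests.post("http://localhost:11434/api/chat", json={
        "model": "llama3.1",  # illustrative model name
        "messages": [{"role": "user", "content": f"Translate to English: {text}"}],
        "stream": False,
    })
    return resp.json()["message"]["content"]

print(translate("Bonjour tout le monde"))
```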
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7589/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3892
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3892/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3892/comments
https://api.github.com/repos/ollama/ollama/issues/3892/events
https://github.com/ollama/ollama/pull/3892
2,262,249,719
PR_kwDOJ0Z1Ps5tp2si
3,892
refactor modelfile parser
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-04-24T21:52:23
2024-05-03T00:04:48
2024-05-03T00:04:47
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3892", "html_url": "https://github.com/ollama/ollama/pull/3892", "diff_url": "https://github.com/ollama/ollama/pull/3892.diff", "patch_url": "https://github.com/ollama/ollama/pull/3892.patch", "merged_at": "2024-05-03T00:04:47" }
split from #3833 resolves #3977
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3892/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3796
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3796/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3796/comments
https://api.github.com/repos/ollama/ollama/issues/3796/events
https://github.com/ollama/ollama/pull/3796
2,255,060,488
PR_kwDOJ0Z1Ps5tRf2m
3,796
feat: enable OLLAMA Arc GPU support with SYCL backend
{ "login": "gamunu", "id": 4501687, "node_id": "MDQ6VXNlcjQ1MDE2ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/4501687?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gamunu", "html_url": "https://github.com/gamunu", "followers_url": "https://api.github.com/users/gamunu/followers", "following_url": "https://api.github.com/users/gamunu/following{/other_user}", "gists_url": "https://api.github.com/users/gamunu/gists{/gist_id}", "starred_url": "https://api.github.com/users/gamunu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gamunu/subscriptions", "organizations_url": "https://api.github.com/users/gamunu/orgs", "repos_url": "https://api.github.com/users/gamunu/repos", "events_url": "https://api.github.com/users/gamunu/events{/privacy}", "received_events_url": "https://api.github.com/users/gamunu/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
21
2024-04-21T12:55:04
2024-06-09T17:59:57
2024-06-09T17:59:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3796", "html_url": "https://github.com/ollama/ollama/pull/3796", "diff_url": "https://github.com/ollama/ollama/pull/3796.diff", "patch_url": "https://github.com/ollama/ollama/pull/3796.patch", "merged_at": null }
This is based on the original PR created by @felipeagc:main https://github.com/ollama/ollama/pull/2458. It seems that the work on that pull request has come to a halt. I would like to work on this project in the next few days and accelerate the progress. I have tested the build with Ubuntu LTS and GPU Arc770. I'm happy to progress the PR with the community feedback. ```log time=2024-04-21T17:39:51.870+05:30 level=INFO source=images.go:817 msg="total blobs: 0" time=2024-04-21T17:39:51.870+05:30 level=INFO source=images.go:824 msg="total unused blobs removed: 0" time=2024-04-21T17:39:51.874+05:30 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.1.32-17-g91f1201-dirty)" time=2024-04-21T17:39:51.874+05:30 level=INFO source=payload.go:28 msg="extracting embedded files" dir=/tmp/ollama2497595442/runners time=2024-04-21T17:39:55.586+05:30 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v60002 cpu]" time=2024-04-21T17:39:55.586+05:30 level=INFO source=gpu.go:140 msg="Detecting GPU type" time=2024-04-21T17:39:55.586+05:30 level=INFO source=gpu.go:320 msg="Searching for GPU management library libcudart.so*" time=2024-04-21T17:39:55.588+05:30 level=INFO source=gpu.go:366 msg="Discovered GPU libraries: [/tmp/ollama2497595442/runners/cuda_v11/libcudart.so.11.0]" time=2024-04-21T17:39:55.601+05:30 level=INFO source=gpu.go:395 msg="Unable to load cudart CUDA management library /tmp/ollama2497595442/runners/cuda_v11/libcudart.so.11.0: cudart init failure: 100" time=2024-04-21T17:39:55.601+05:30 level=INFO source=gpu.go:320 msg="Searching for GPU management library libnvidia-ml.so" time=2024-04-21T17:39:55.603+05:30 level=INFO source=gpu.go:366 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.535.171.04]" time=2024-04-21T17:39:55.608+05:30 level=INFO source=gpu.go:378 msg="Unable to load NVML management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.535.171.04: nvml vram init failure: 9" time=2024-04-21T17:39:55.608+05:30 level=INFO source=gpu.go:320 msg="Searching for GPU management library libze_intel_gpu.so" time=2024-04-21T17:39:55.610+05:30 level=INFO source=gpu.go:366 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libze_intel_gpu.so.1.3.28202.51]" time=2024-04-21T17:39:55.662+05:30 level=INFO source=gpu.go:166 msg="Intel GPU detected" time=2024-04-21T17:39:55.662+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2" ```
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3796/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3796/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/42
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/42/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/42/comments
https://api.github.com/repos/ollama/ollama/issues/42/events
https://github.com/ollama/ollama/pull/42
1,792,018,838
PR_kwDOJ0Z1Ps5U1d6p
42
free llama model
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-07-06T18:15:23
2023-07-06T18:16:25
2023-07-06T18:16:22
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/42", "html_url": "https://github.com/ollama/ollama/pull/42", "diff_url": "https://github.com/ollama/ollama/pull/42.diff", "patch_url": "https://github.com/ollama/ollama/pull/42.patch", "merged_at": "2023-07-06T18:16:22" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/42/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/42/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1413
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1413/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1413/comments
https://api.github.com/repos/ollama/ollama/issues/1413/events
https://github.com/ollama/ollama/issues/1413
2,029,887,591
I_kwDOJ0Z1Ps54_aBn
1,413
OOM Error on Bad CUDA Driver
{ "login": "farhanhubble", "id": 761785, "node_id": "MDQ6VXNlcjc2MTc4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/761785?v=4", "gravatar_id": "", "url": "https://api.github.com/users/farhanhubble", "html_url": "https://github.com/farhanhubble", "followers_url": "https://api.github.com/users/farhanhubble/followers", "following_url": "https://api.github.com/users/farhanhubble/following{/other_user}", "gists_url": "https://api.github.com/users/farhanhubble/gists{/gist_id}", "starred_url": "https://api.github.com/users/farhanhubble/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/farhanhubble/subscriptions", "organizations_url": "https://api.github.com/users/farhanhubble/orgs", "repos_url": "https://api.github.com/users/farhanhubble/repos", "events_url": "https://api.github.com/users/farhanhubble/events{/privacy}", "received_events_url": "https://api.github.com/users/farhanhubble/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2023-12-07T04:44:03
2024-01-08T21:42:03
2024-01-08T21:42:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
**Ollama version**: 0.1.1 **Reproduction**: - `nvidia-smi` ``` Failed to initialize NVML: Driver/library version mismatch NVML library version: 535.129 ``` - Run server ``` IP='0.0.0.0' PORT='11434' EXE='bin/ollama' ARGS='serve' ENV="OLLAMA_HOST=$IP:$PORT'" CMD="$ENV $EXE $ARGS" echo Running $CMD eval $CMD ``` - Try embedding a slightly long payload ``` import requests response = requests.post('http://localhost:11434/api/embeddings', json={ 'model': 'llama2:latest', 'prompt': 'Here is an article about llamas...'*30 }) ``` - Error ``` CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml/ggml-cuda.cu:4856: out of memory ``` - Logs ``` {"timestamp":1701923173,"level":"INFO","function":"main","line":1192,"message":"system info","n_threads":32,"total_threads":64,"system_info":"AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 | "} llama.cpp: loading model from [REDACTED] llama_model_load_internal: format = ggjt v3 (latest) llama_model_load_internal: n_vocab = 32000 llama_model_load_internal: n_ctx = 2048 llama_model_load_internal: n_embd = 4096 llama_model_load_internal: n_mult = 256 llama_model_load_internal: n_head = 32 llama_model_load_internal: n_head_kv = 32 llama_model_load_internal: n_layer = 32 llama_model_load_internal: n_rot = 128 llama_model_load_internal: n_gqa = 1 llama_model_load_internal: rnorm_eps = 5.0e-06 llama_model_load_internal: n_ff = 11008 llama_model_load_internal: freq_base = 10000.0 llama_model_load_internal: freq_scale = 1 llama_model_load_internal: ftype = 2 (mostly Q4_0) llama_model_load_internal: model size = 7B llama_model_load_internal: ggml ctx size = 0.08 MB llama_model_load_internal: using CUDA for GPU acceleration llama_model_load_internal: mem required = 4013.73 MB (+ 1024.00 MB per state) llama_model_load_internal: offloading 0 repeating layers to GPU llama_model_load_internal: offloaded 0/35 layers to GPU llama_model_load_internal: total VRAM used: 384 MB llama_new_context_with_model: kv self size = 1024.00 MB ``` - Fix: Hide GPU with ` CUDA_VISIBLE_DEVICES=''`
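For completeness, a sketch of that workaround applied to the launch script above; it simply hides all CUDA devices from the server process until the driver/library mismatch is fixed (host and binary path are as in the reproduction):

```python
import os
import subprocess

# Hide the GPU whose driver is mismatched so inference falls back to CPU.
env = dict(os.environ, OLLAMA_HOST="0.0.0.0:11434", CUDA_VISIBLE_DEVICES="")
subprocess.run(["bin/ollama", "serve"], env=env)
```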
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1413/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1413/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5243
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5243/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5243/comments
https://api.github.com/repos/ollama/ollama/issues/5243/events
https://github.com/ollama/ollama/pull/5243
2,368,869,000
PR_kwDOJ0Z1Ps5zS1e1
5,243
Fix use_mmap for Modelfiles
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-06-23T20:02:23
2024-07-03T20:59:46
2024-07-03T20:59:42
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5243", "html_url": "https://github.com/ollama/ollama/pull/5243", "diff_url": "https://github.com/ollama/ollama/pull/5243.diff", "patch_url": "https://github.com/ollama/ollama/pull/5243.patch", "merged_at": "2024-07-03T20:59:42" }
PR #5205 was incomplete and missed handling numeric json values. This switches to a pointer type to represent undefined as nil. Fixes #5198 ``` % cat use_mmap.modelfile FROM library/llama2 PARAMETER use_mmap false % ollama create test -f ./use_mmap.modelfile transferring model data using existing layer sha256:8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 ... writing manifest success % ollama run test >>> Send a message (/? for help) % grep "starting llama server" server.log time=2024-06-23T12:59:32.057-07:00 level=INFO source=server.go:363 msg="starting llama server" cmd="/tmp/ollama4152649118/runners/cpu_avx2/ollama_llama_server --model /home/daniel/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --embedding --log-disable --no-mmap --parallel 1 --port 34091" ```
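The actual fix is in Go, but the tri-state idea is easy to illustrate: a plain boolean cannot distinguish "not set" from `false`, while an optional (pointer/None) value can, and numeric JSON values still need to coerce cleanly. A language-neutral sketch in Python, not the PR's implementation:

```python
import json
from typing import Optional

def parse_use_mmap(raw: str) -> Optional[bool]:
    """None means 'undefined': the runner keeps its default behaviour."""
    value = json.loads(raw).get("use_mmap")  # absent (or null) -> None
    if value is None:
        return None
    if isinstance(value, bool):
        return value
    return bool(value)  # numeric JSON values (0/1) that the earlier fix missed

assert parse_use_mmap('{}') is None                    # undefined: keep default
assert parse_use_mmap('{"use_mmap": false}') is False  # explicit --no-mmap
assert parse_use_mmap('{"use_mmap": 1}') is True
```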
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5243/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7036
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7036/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7036/comments
https://api.github.com/repos/ollama/ollama/issues/7036/events
https://github.com/ollama/ollama/issues/7036
2,554,992,877
I_kwDOJ0Z1Ps6YShjt
7,036
Error creating the manifest
{ "login": "seblessa", "id": 93839108, "node_id": "U_kgDOBZffBA", "avatar_url": "https://avatars.githubusercontent.com/u/93839108?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seblessa", "html_url": "https://github.com/seblessa", "followers_url": "https://api.github.com/users/seblessa/followers", "following_url": "https://api.github.com/users/seblessa/following{/other_user}", "gists_url": "https://api.github.com/users/seblessa/gists{/gist_id}", "starred_url": "https://api.github.com/users/seblessa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seblessa/subscriptions", "organizations_url": "https://api.github.com/users/seblessa/orgs", "repos_url": "https://api.github.com/users/seblessa/repos", "events_url": "https://api.github.com/users/seblessa/events{/privacy}", "received_events_url": "https://api.github.com/users/seblessa/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
null
[]
null
1
2024-09-29T15:36:19
2024-10-04T17:10:30
2024-10-04T17:10:30
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hello! I'm trying to create a custom model in ollama from a gguf file. I'm using the Modelfile from the example in the README: ``` ~/example$ cat Modelfile FROM ./llama.gguf ``` When using the create command, the output seems fine and the model is created. ``` ~/example$ ollama create example -f Modelfile transferring model data 100% using existing layer sha256:91bb99d2cc00b169f1a22f6e4ea87532faa5a8399d9496489793f582b5346f00 using autodetected template llama3-instruct using existing layer sha256:56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb using existing layer sha256:b438d145ccf05e8d943b530a43a066311750eb4d428c3b1bdff107454a27cab4 writing manifest success ``` Everything seems fine: ``` ~/example$ ollama list NAME ID SIZE MODIFIED example:latest f80b565a0320 48 MB 4 minutes ago llama3.1:70b c0df3564cfe8 39 GB 13 days ago llama3.1:8b 42182419e950 4.7 GB 2 weeks ago ``` But when I try to run it: ``` ~/example$ ollama run example pulling manifest Error: pull model manifest: file does not exist ``` And when I list the directory, the manifest really is not there. ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.12
{ "login": "seblessa", "id": 93839108, "node_id": "U_kgDOBZffBA", "avatar_url": "https://avatars.githubusercontent.com/u/93839108?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seblessa", "html_url": "https://github.com/seblessa", "followers_url": "https://api.github.com/users/seblessa/followers", "following_url": "https://api.github.com/users/seblessa/following{/other_user}", "gists_url": "https://api.github.com/users/seblessa/gists{/gist_id}", "starred_url": "https://api.github.com/users/seblessa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seblessa/subscriptions", "organizations_url": "https://api.github.com/users/seblessa/orgs", "repos_url": "https://api.github.com/users/seblessa/repos", "events_url": "https://api.github.com/users/seblessa/events{/privacy}", "received_events_url": "https://api.github.com/users/seblessa/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7036/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7036/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4043
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4043/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4043/comments
https://api.github.com/repos/ollama/ollama/issues/4043/events
https://github.com/ollama/ollama/issues/4043
2,270,894,070
I_kwDOJ0Z1Ps6HWxf2
4,043
Error while running llama2 on Ollama
{ "login": "prateemnaskar", "id": 168468278, "node_id": "U_kgDOCgqfNg", "avatar_url": "https://avatars.githubusercontent.com/u/168468278?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prateemnaskar", "html_url": "https://github.com/prateemnaskar", "followers_url": "https://api.github.com/users/prateemnaskar/followers", "following_url": "https://api.github.com/users/prateemnaskar/following{/other_user}", "gists_url": "https://api.github.com/users/prateemnaskar/gists{/gist_id}", "starred_url": "https://api.github.com/users/prateemnaskar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prateemnaskar/subscriptions", "organizations_url": "https://api.github.com/users/prateemnaskar/orgs", "repos_url": "https://api.github.com/users/prateemnaskar/repos", "events_url": "https://api.github.com/users/prateemnaskar/events{/privacy}", "received_events_url": "https://api.github.com/users/prateemnaskar/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-04-30T08:53:02
2024-05-21T17:41:25
2024-05-21T17:41:25
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
After running the command "ollama run llama2" in Command Prompt (I'm using Windows), it says: Error: llama runner process no longer running: 3221225785 How can I resolve this issue?
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4043/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4043/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2933
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2933/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2933/comments
https://api.github.com/repos/ollama/ollama/issues/2933/events
https://github.com/ollama/ollama/pull/2933
2,168,968,501
PR_kwDOJ0Z1Ps5otBzn
2,933
Update main.py print summary only
{ "login": "jliu015", "id": 149941742, "node_id": "U_kgDOCO_t7g", "avatar_url": "https://avatars.githubusercontent.com/u/149941742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jliu015", "html_url": "https://github.com/jliu015", "followers_url": "https://api.github.com/users/jliu015/followers", "following_url": "https://api.github.com/users/jliu015/following{/other_user}", "gists_url": "https://api.github.com/users/jliu015/gists{/gist_id}", "starred_url": "https://api.github.com/users/jliu015/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jliu015/subscriptions", "organizations_url": "https://api.github.com/users/jliu015/orgs", "repos_url": "https://api.github.com/users/jliu015/repos", "events_url": "https://api.github.com/users/jliu015/events{/privacy}", "received_events_url": "https://api.github.com/users/jliu015/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-03-05T11:32:54
2024-11-21T09:26:51
2024-11-21T09:26:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2933", "html_url": "https://github.com/ollama/ollama/pull/2933", "diff_url": "https://github.com/ollama/ollama/pull/2933.diff", "patch_url": "https://github.com/ollama/ollama/pull/2933.patch", "merged_at": null }
The original program printed both the input document and its summary. The input document is very long, so the summary is buried at the end; it really cost me some time to pick the summary out by eye.
```
>>> type(result)
<class 'dict'>
>>> result.keys()
dict_keys(['input_documents', 'output_text'])
```
BTW, the deprecated functions are also updated.
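Given the keys above, printing only the summary reduces to selecting `output_text`. A one-line sketch of the change's intent, where `chain` and `docs` are hypothetical stand-ins for the objects in the example:

```python
result = chain.invoke({"input_documents": docs})  # hypothetical chain and docs
print(result["output_text"])  # print only the summary, not the input documents
```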
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/users/ParthSareen/followers", "following_url": "https://api.github.com/users/ParthSareen/following{/other_user}", "gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions", "organizations_url": "https://api.github.com/users/ParthSareen/orgs", "repos_url": "https://api.github.com/users/ParthSareen/repos", "events_url": "https://api.github.com/users/ParthSareen/events{/privacy}", "received_events_url": "https://api.github.com/users/ParthSareen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2933/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2933/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8685
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8685/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8685/comments
https://api.github.com/repos/ollama/ollama/issues/8685/events
https://github.com/ollama/ollama/issues/8685
2,819,830,645
I_kwDOJ0Z1Ps6oEzN1
8,685
Request to change the install location and model path, and also add a GUI
{ "login": "Bostoneary", "id": 96782219, "node_id": "U_kgDOBcTHiw", "avatar_url": "https://avatars.githubusercontent.com/u/96782219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bostoneary", "html_url": "https://github.com/Bostoneary", "followers_url": "https://api.github.com/users/Bostoneary/followers", "following_url": "https://api.github.com/users/Bostoneary/following{/other_user}", "gists_url": "https://api.github.com/users/Bostoneary/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bostoneary/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bostoneary/subscriptions", "organizations_url": "https://api.github.com/users/Bostoneary/orgs", "repos_url": "https://api.github.com/users/Bostoneary/repos", "events_url": "https://api.github.com/users/Bostoneary/events{/privacy}", "received_events_url": "https://api.github.com/users/Bostoneary/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2025-01-30T03:38:48
2025-01-30T03:56:41
2025-01-30T03:56:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
This software is automatically installed to the default path on my C drive, and all models are downloaded to a specific path on C as well. However, there is limited space on my C drive; can we change the software install location and the model download path? Also, is it possible to have a GUI for this software? I hope this can be improved one day.
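For the model download path specifically, Ollama honours the documented `OLLAMA_MODELS` environment variable, which points model storage at a different directory, e.g. on another drive. A small sketch (the path is illustrative):

```python
import os
import subprocess

# Store models on D: instead of the default under C: (illustrative path).
env = dict(os.environ, OLLAMA_MODELS=r"D:\ollama\models")
subprocess.run(["ollama", "serve"], env=env)
```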
{ "login": "Bostoneary", "id": 96782219, "node_id": "U_kgDOBcTHiw", "avatar_url": "https://avatars.githubusercontent.com/u/96782219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bostoneary", "html_url": "https://github.com/Bostoneary", "followers_url": "https://api.github.com/users/Bostoneary/followers", "following_url": "https://api.github.com/users/Bostoneary/following{/other_user}", "gists_url": "https://api.github.com/users/Bostoneary/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bostoneary/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bostoneary/subscriptions", "organizations_url": "https://api.github.com/users/Bostoneary/orgs", "repos_url": "https://api.github.com/users/Bostoneary/repos", "events_url": "https://api.github.com/users/Bostoneary/events{/privacy}", "received_events_url": "https://api.github.com/users/Bostoneary/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8685/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8685/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6082
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6082/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6082/comments
https://api.github.com/repos/ollama/ollama/issues/6082/events
https://github.com/ollama/ollama/issues/6082
2,438,927,742
I_kwDOJ0Z1Ps6RXxV-
6,082
Why "wsarecv: An existing connection was forcibly closed by the remote host" occurs in Ollama Windows preview
{ "login": "springsuu", "id": 170060937, "node_id": "U_kgDOCiLsiQ", "avatar_url": "https://avatars.githubusercontent.com/u/170060937?v=4", "gravatar_id": "", "url": "https://api.github.com/users/springsuu", "html_url": "https://github.com/springsuu", "followers_url": "https://api.github.com/users/springsuu/followers", "following_url": "https://api.github.com/users/springsuu/following{/other_user}", "gists_url": "https://api.github.com/users/springsuu/gists{/gist_id}", "starred_url": "https://api.github.com/users/springsuu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/springsuu/subscriptions", "organizations_url": "https://api.github.com/users/springsuu/orgs", "repos_url": "https://api.github.com/users/springsuu/repos", "events_url": "https://api.github.com/users/springsuu/events{/privacy}", "received_events_url": "https://api.github.com/users/springsuu/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-07-31T01:30:10
2024-08-01T22:22:28
2024-08-01T22:22:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama3/manifests/latest": read tcp 192.168.2.23:51514->172.67.182.229:443: wsarecv: An existing connection was forcibly closed by the remote host ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version _No response_
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6082/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6082/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6233
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6233/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6233/comments
https://api.github.com/repos/ollama/ollama/issues/6233/events
https://github.com/ollama/ollama/issues/6233
2,453,537,316
I_kwDOJ0Z1Ps6SPgIk
6,233
Strange! Each request consumes an additional 2 seconds when I use /api/embed
{ "login": "AlbertXu233", "id": 49802174, "node_id": "MDQ6VXNlcjQ5ODAyMTc0", "avatar_url": "https://avatars.githubusercontent.com/u/49802174?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AlbertXu233", "html_url": "https://github.com/AlbertXu233", "followers_url": "https://api.github.com/users/AlbertXu233/followers", "following_url": "https://api.github.com/users/AlbertXu233/following{/other_user}", "gists_url": "https://api.github.com/users/AlbertXu233/gists{/gist_id}", "starred_url": "https://api.github.com/users/AlbertXu233/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AlbertXu233/subscriptions", "organizations_url": "https://api.github.com/users/AlbertXu233/orgs", "repos_url": "https://api.github.com/users/AlbertXu233/repos", "events_url": "https://api.github.com/users/AlbertXu233/events{/privacy}", "received_events_url": "https://api.github.com/users/AlbertXu233/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5808482718, "node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng", "url": "https://api.github.com/repos/ollama/ollama/labels/performance", "name": "performance", "color": "A5B5C6", "default": false, "description": "" }, { "id": 6677370291, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw", "url": "https://api.github.com/repos/ollama/ollama/labels/networking", "name": "networking", "color": "0B5368", "default": false, "description": "Issues relating to ollama pull and push" } ]
closed
false
null
[]
null
6
2024-08-07T13:48:11
2024-09-05T18:51:42
2024-09-05T18:51:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? <img width="809" alt="image" src="https://github.com/user-attachments/assets/d649cc9e-7b95-4146-aeef-4eb564be7047"> OS: Win11 device: Intel i5 1335U with only an iGPU embedding model: shaw/dmeta-embedding-zh-q4 I tested with a very simple input: ["你好"] ### OS Windows ### GPU Intel ### CPU Intel ### Ollama version 0.3.4
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6233/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7188
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7188/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7188/comments
https://api.github.com/repos/ollama/ollama/issues/7188/events
https://github.com/ollama/ollama/issues/7188
2,583,662,665
I_kwDOJ0Z1Ps6Z_5BJ
7,188
Bad juju creating a model (the llama.cpp generated file starts with "GGUF")
{ "login": "robbiemu", "id": 248927, "node_id": "MDQ6VXNlcjI0ODkyNw==", "avatar_url": "https://avatars.githubusercontent.com/u/248927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/robbiemu", "html_url": "https://github.com/robbiemu", "followers_url": "https://api.github.com/users/robbiemu/followers", "following_url": "https://api.github.com/users/robbiemu/following{/other_user}", "gists_url": "https://api.github.com/users/robbiemu/gists{/gist_id}", "starred_url": "https://api.github.com/users/robbiemu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/robbiemu/subscriptions", "organizations_url": "https://api.github.com/users/robbiemu/orgs", "repos_url": "https://api.github.com/users/robbiemu/repos", "events_url": "https://api.github.com/users/robbiemu/events{/privacy}", "received_events_url": "https://api.github.com/users/robbiemu/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
12
2024-10-13T05:26:22
2024-10-15T17:57:03
2024-10-15T00:06:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

I have the [2b base model of Salamandra](https://huggingface.co/robbiemu/salamandra-2b) quantized to several different precisions, but I am getting an error when creating it:

```
ollama create salamandra:2b_bf16 -f ./Modelfile
transferring model data 100%
Error: invalid file magic
```

I hesitate to even use the chat template, because it is a base model rather than a chat model, so its chat output looks somewhat erratic.

```
> hola, puedes decirme por qué el sol es amarillo?
2014-03-09 Hola! El color del Sol se debe a que la luz solar tiene una longitud de onda larga (de unos 580 nm) y corta (entre los 400nm y los 760nm). La radiación visible es aquella cuya longitud de onda está entre el rojo y el azul, por lo tanto las longitudes de onda más cortas son rojas. El Sol emite luz en todas direcciones pero la mayor
>
```

Its plain text generation, however, looks good (aligned with what you would expect from a base model). Here is a log of a full run, in case the details are pertinent to the "invalid file magic" error above:

```
llama-cli -m ./salamandra-2b_bf16.gguf --ctx-size 8192 --rope-freq-base 10000.0 --top-p 0.95 --repeat-penalty 1.2 --temp 0.1 --n-predict 128 --top-k 40 -p "hola, puedes decirme por qué el sol es amarillo?"
build: 3889 (b6d6c528) with Apple clang version 16.0.0 (clang-1600.0.26.3) for arm64-apple-darwin24.0.0
main: warning: changing RoPE frequency base to 10000.
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_loader: loaded meta data with 29 key-value pairs and 219 tensors from ./salamandra-2b_bf16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.size_label str = 2.3B
llama_model_loader: - kv 3: general.license str = apache-2.0
llama_model_loader: - kv 4: general.tags arr[str,1] = ["text-generation"]
llama_model_loader: - kv 5: general.languages arr[str,36] = ["bg", "ca", "code", "cs", "cy", "da"...
llama_model_loader: - kv 6: llama.block_count u32 = 24
llama_model_loader: - kv 7: llama.context_length u32 = 8192
llama_model_loader: - kv 8: llama.embedding_length u32 = 2048
llama_model_loader: - kv 9: llama.feed_forward_length u32 = 5440
llama_model_loader: - kv 10: llama.attention.head_count u32 = 16
llama_model_loader: - kv 11: llama.attention.head_count_kv u32 = 16
llama_model_loader: - kv 12: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 13: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 14: general.file_type u32 = 32
llama_model_loader: - kv 15: llama.vocab_size u32 = 256000
llama_model_loader: - kv 16: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 17: tokenizer.ggml.add_space_prefix bool = true
llama_model_loader: - kv 18: tokenizer.ggml.model str = llama
llama_model_loader: - kv 19: tokenizer.ggml.pre str = default
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,256000] = ["<unk>", "<s>", "</s>", "<pad>", "<|...
llama_model_loader: - kv 21: tokenizer.ggml.scores arr[f32,256000] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv 22: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 25: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: general.quantization_version u32 = 2
llama_model_loader: - type f32: 49 tensors
llama_model_loader: - type bf16: 170 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 104
llm_load_vocab: token to piece cache size = 1.8842 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 256000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 2048
llm_load_print_meta: n_layer = 24
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 16
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 2048
llm_load_print_meta: n_embd_v_gqa = 2048
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 5440
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = BF16
llm_load_print_meta: model params = 2.25 B
llm_load_print_meta: model size = 4.20 GiB (16.00 BPW)
llm_load_print_meta: general.name = n/a
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 145 '<0x0A>'
llm_load_print_meta: EOT token = 5 '<|im_end|>'
llm_load_print_meta: EOG token = 2 '</s>'
llm_load_print_meta: EOG token = 5 '<|im_end|>'
llm_load_print_meta: max token length = 72
llm_load_tensors: ggml ctx size = 0.20 MiB
llm_load_tensors: offloading 24 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 25/25 layers to GPU
llm_load_tensors: Metal buffer size = 4298.39 MiB
llm_load_tensors: CPU buffer size = 1000.00 MiB
.......................................................
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Max
ggml_metal_init: picking default device: Apple M3 Max
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name: Apple M3 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple9 (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 42949.67 MB
llama_kv_cache_init: Metal KV buffer size = 1536.00 MiB
llama_new_context_with_model: KV self size = 1536.00 MiB, K (f16): 768.00 MiB, V (f16): 768.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.98 MiB
llama_new_context_with_model: Metal compute buffer size = 288.00 MiB
llama_new_context_with_model: CPU compute buffer size = 500.00 MiB
llama_new_context_with_model: graph nodes = 774
llama_new_context_with_model: graph splits = 339
llama_init_from_gpt_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 12
system_info: n_threads = 12 (n_threads_batch = 12) / 16 | AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 1 | LLAMAFILE = 1 |
sampler seed: 892523417
sampler params: repeat_last_n = 64, repeat_penalty = 1.200, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.100
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> top-k -> tail-free -> typical -> top-p -> min-p -> temp-ext -> softmax -> dist
generate: n_ctx = 8192, n_batch = 2048, n_predict = 128, n_keep = 1

hola, puedes decirme por qué el sol es amarillo? ¿Por qué la luz del sol se ve más clara en verano que en invierno? La respuesta a esta pregunta está relacionada con las propiedades de los colores. Los colores son formas diferentes de energía electromagnética y tienen una longitud de onda diferente. En general, cuanto mayor sea el número de vibraciones por segundo (velocidad), menor será la frecuencia del color. Por ejemplo, si se mide en Hertz (Hz) o ciclos por segundo, un color rojo tiene 160 Hz mientras que uno azul tiene solo 435 Hz. La luz visible es una combinación de colores compu

llama_perf_sampler_print: sampling time = 132.98 ms / 143 runs ( 0.93 ms per token, 1075.34 tokens per second)
llama_perf_context_print: load time = 685.66 ms
llama_perf_context_print: prompt eval time = 1167.94 ms / 15 tokens ( 77.86 ms per token, 12.84 tokens per second)
llama_perf_context_print: eval time = 17103.92 ms / 127 runs ( 134.68 ms per token, 7.43 tokens per second)
llama_perf_context_print: total time = 18423.56 ms / 142 tokens
ggml_metal_free: deallocating
```

But perhaps, when I wrote it, I did something wrong in my Modelfile?
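As a sanity check on the file itself (just a quick sketch of my own, not anything from the Ollama docs; pass whichever .gguf the Modelfile points at), reading the first bytes shows whether the file really carries the GGUF magic that `ollama create` checks for:

```python
# Quick sanity-check sketch (assumptions: Python 3, a local .gguf path).
# A valid GGUF file starts with the 4-byte magic b'GGUF' followed by a
# little-endian uint32 format version.
import struct
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "salamandra-2b_bf16.gguf"
with open(path, "rb") as f:
    magic = f.read(4)
    (version,) = struct.unpack("<I", f.read(4))

print(f"magic={magic!r} version={version}")
# Anything other than magic=b'GGUF' here would explain the
# "invalid file magic" error from `ollama create`.
```

In any case, here is the Modelfile I used: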
```
# Ollama Modelfile for Salamandra 2B IQ4_NL

FROM ./salamandra-2b_Q8_0.gguf

# Model Parameters
PARAMETER num_ctx 8192
PARAMETER rope_freq_base 10000.0
PARAMETER top_p 0.95
PARAMETER repeat_penalty 1.2

# System Prompt
SYSTEM """You are a multilingual assistant capable of understanding and responding in multiple languages. Adapt your responses to match the user's input language while providing clear, accurate, and concise information."""

# Template
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>{{ end }}<|im_start|>assistant"""

# License
LICENSE """
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright 2024 Language Technologies Unit, Barcelona Supercomputing Center

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."""
```

### OS
macOS

### GPU
Apple

### CPU
Apple

### Ollama version
0.3.13
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7188/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/842
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/842/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/842/comments
https://api.github.com/repos/ollama/ollama/issues/842/events
https://github.com/ollama/ollama/pull/842
1,950,472,851
PR_kwDOJ0Z1Ps5dLiTN
842
#790 improve readme
{ "login": "jerzydziewierz", "id": 1606347, "node_id": "MDQ6VXNlcjE2MDYzNDc=", "avatar_url": "https://avatars.githubusercontent.com/u/1606347?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerzydziewierz", "html_url": "https://github.com/jerzydziewierz", "followers_url": "https://api.github.com/users/jerzydziewierz/followers", "following_url": "https://api.github.com/users/jerzydziewierz/following{/other_user}", "gists_url": "https://api.github.com/users/jerzydziewierz/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerzydziewierz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerzydziewierz/subscriptions", "organizations_url": "https://api.github.com/users/jerzydziewierz/orgs", "repos_url": "https://api.github.com/users/jerzydziewierz/repos", "events_url": "https://api.github.com/users/jerzydziewierz/events{/privacy}", "received_events_url": "https://api.github.com/users/jerzydziewierz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-10-18T19:18:44
2023-11-29T21:30:02
2023-11-29T21:30:02
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/842", "html_url": "https://github.com/ollama/ollama/pull/842", "diff_url": "https://github.com/ollama/ollama/pull/842.diff", "patch_url": "https://github.com/ollama/ollama/pull/842.patch", "merged_at": null }
As promised, an updated README that explains how to force lower memory usage.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/842/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3499
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3499/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3499/comments
https://api.github.com/repos/ollama/ollama/issues/3499/events
https://github.com/ollama/ollama/issues/3499
2,227,241,505
I_kwDOJ0Z1Ps6EwQIh
3,499
OLLAMA_INITIAL_MODEL for use with OLLAMA_KEEP_ALIVE=-1
{ "login": "BananaAcid", "id": 1894723, "node_id": "MDQ6VXNlcjE4OTQ3MjM=", "avatar_url": "https://avatars.githubusercontent.com/u/1894723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BananaAcid", "html_url": "https://github.com/BananaAcid", "followers_url": "https://api.github.com/users/BananaAcid/followers", "following_url": "https://api.github.com/users/BananaAcid/following{/other_user}", "gists_url": "https://api.github.com/users/BananaAcid/gists{/gist_id}", "starred_url": "https://api.github.com/users/BananaAcid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BananaAcid/subscriptions", "organizations_url": "https://api.github.com/users/BananaAcid/orgs", "repos_url": "https://api.github.com/users/BananaAcid/repos", "events_url": "https://api.github.com/users/BananaAcid/events{/privacy}", "received_events_url": "https://api.github.com/users/BananaAcid/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
4
2024-04-05T07:11:36
2024-05-15T00:34:47
2024-05-15T00:34:47
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What are you trying to do?

It would be nice to be able to initially load a model using an env var like OLLAMA_INITIAL_MODEL, in conjunction with the keep_alive=-1 option, so that Ollama starts up ready to go on slow systems (such as a mining rig with USB2-riser-connected RTXs).

### How should we solve this?

_No response_

### What is the impact of not solving this?

_No response_

### Anything else?

_No response_
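For what it's worth, a workaround sketch until something like OLLAMA_INITIAL_MODEL exists (assuming the server listens on the default port and the model, "llama3" here, is already pulled): fire one empty generate request with keep_alive set to -1 from a startup script, and the model loads and stays resident.

```python
# Preload sketch (assumptions: Ollama on the default 127.0.0.1:11434, a model
# named "llama3" already pulled). A generate request with no prompt just
# loads the model; keep_alive=-1 keeps it in memory indefinitely.
import json
import urllib.request

req = urllib.request.Request(
    "http://127.0.0.1:11434/api/generate",
    data=json.dumps({"model": "llama3", "keep_alive": -1}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # returns once the model is loaded
```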
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3499/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3499/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4999
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4999/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4999/comments
https://api.github.com/repos/ollama/ollama/issues/4999/events
https://github.com/ollama/ollama/issues/4999
2,348,430,675
I_kwDOJ0Z1Ps6L-jVT
4,999
Error: Head "http://127.0.0.1:11434/": EOF
{ "login": "HyperUpscale", "id": 126105457, "node_id": "U_kgDOB4Q3cQ", "avatar_url": "https://avatars.githubusercontent.com/u/126105457?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HyperUpscale", "html_url": "https://github.com/HyperUpscale", "followers_url": "https://api.github.com/users/HyperUpscale/followers", "following_url": "https://api.github.com/users/HyperUpscale/following{/other_user}", "gists_url": "https://api.github.com/users/HyperUpscale/gists{/gist_id}", "starred_url": "https://api.github.com/users/HyperUpscale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HyperUpscale/subscriptions", "organizations_url": "https://api.github.com/users/HyperUpscale/orgs", "repos_url": "https://api.github.com/users/HyperUpscale/repos", "events_url": "https://api.github.com/users/HyperUpscale/events{/privacy}", "received_events_url": "https://api.github.com/users/HyperUpscale/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-06-12T10:40:36
2024-06-12T12:19:26
2024-06-12T12:19:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

C:\Users\win>ollama list
Error: Head "http://127.0.0.1:11434/": EOF

C:\Users\win>ollama -v
Warning: could not connect to a running Ollama instance
Warning: client version is 0.1.43

C:\Users\win>ollama serve
Error: listen tcp 127.0.0.1:11434: bind: An attempt was made to access a socket in a way forbidden by its access permissions.

app.log:
time=2024-06-12T18:35:12.068+08:00 level=INFO source=server.go:135 msg="starting server..."
time=2024-06-12T18:35:12.069+08:00 level=INFO source=server.go:121 msg="started ollama server with pid 15024"
time=2024-06-12T18:35:12.069+08:00 level=INFO source=server.go:123 msg="ollama server logs C:\\Users\\win\\AppData\\Local\\Ollama\\server.log"
time=2024-06-12T18:35:12.116+08:00 level=WARN source=server.go:157 msg="server crash 21 - exit code 1 - respawning"

config.json:
{"id":"a73328f3-b63f-49a0-af7c-1f37f65ff305","first-time-run":true}

Maybe I am doing something wrong, but 0.1.42 was working with the same settings.

![image](https://github.com/ollama/ollama/assets/126105457/23f134be-8adf-482b-bf15-c683ad1abafc)

### OS
Windows

### GPU
Nvidia

### CPU
_No response_

### Ollama version
0.1.43
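One way I tried to narrow this down (my own diagnostic sketch, not from the Ollama docs): check whether anything on the machine can bind the default port at all, since a "forbidden by its access permissions" bind error usually means the port is held or reserved by something else.

```python
# Diagnostic sketch (assumption: run on the affected Windows machine with
# Ollama stopped): try to bind Ollama's default port directly.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("127.0.0.1", 11434))
    print("port 11434 is free and bindable")
except OSError as e:
    # On Windows, WinError 10013 here matches the "forbidden by its
    # access permissions" failure that `ollama serve` reports.
    print(f"cannot bind 11434: {e}")
finally:
    s.close()
```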
{ "login": "HyperUpscale", "id": 126105457, "node_id": "U_kgDOB4Q3cQ", "avatar_url": "https://avatars.githubusercontent.com/u/126105457?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HyperUpscale", "html_url": "https://github.com/HyperUpscale", "followers_url": "https://api.github.com/users/HyperUpscale/followers", "following_url": "https://api.github.com/users/HyperUpscale/following{/other_user}", "gists_url": "https://api.github.com/users/HyperUpscale/gists{/gist_id}", "starred_url": "https://api.github.com/users/HyperUpscale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HyperUpscale/subscriptions", "organizations_url": "https://api.github.com/users/HyperUpscale/orgs", "repos_url": "https://api.github.com/users/HyperUpscale/repos", "events_url": "https://api.github.com/users/HyperUpscale/events{/privacy}", "received_events_url": "https://api.github.com/users/HyperUpscale/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4999/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3246
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3246/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3246/comments
https://api.github.com/repos/ollama/ollama/issues/3246/events
https://github.com/ollama/ollama/issues/3246
2,194,941,966
I_kwDOJ0Z1Ps6C1CgO
3,246
Error: invalid file magic when importing Safetensors models
{ "login": "amnweb", "id": 16545063, "node_id": "MDQ6VXNlcjE2NTQ1MDYz", "avatar_url": "https://avatars.githubusercontent.com/u/16545063?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amnweb", "html_url": "https://github.com/amnweb", "followers_url": "https://api.github.com/users/amnweb/followers", "following_url": "https://api.github.com/users/amnweb/following{/other_user}", "gists_url": "https://api.github.com/users/amnweb/gists{/gist_id}", "starred_url": "https://api.github.com/users/amnweb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amnweb/subscriptions", "organizations_url": "https://api.github.com/users/amnweb/orgs", "repos_url": "https://api.github.com/users/amnweb/repos", "events_url": "https://api.github.com/users/amnweb/events{/privacy}", "received_events_url": "https://api.github.com/users/amnweb/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
11
2024-03-19T13:13:12
2024-06-14T07:20:55
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

> ollama create test -f Modelfile
transferring model data
creating model layer
Error: invalid file magic

This happens for all the **Safetensors** models I try to import. Modelfile content:

`FROM ./model.safetensors`

![Screenshot 2024-03-19 141058](https://github.com/ollama/ollama/assets/16545063/a807e4f9-dfee-4ff4-bc10-cec44167bf9f)

### What did you expect to see?

I expected it to work :)

### Steps to reproduce

_No response_

### Are there any recent changes that introduced the issue?

_No response_

### OS
Windows

### Architecture
amd64

### Platform
_No response_

### Ollama version
0.1.29

### GPU
Nvidia

### GPU info
_No response_

### CPU
Intel

### Other software
_No response_
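To illustrate why the magic check trips (a sketch of my own, assuming Python 3 and the model.safetensors file from the Modelfile above): a .safetensors file carries no 'GGUF' magic at all; it begins with an 8-byte little-endian header length followed by a JSON header.

```python
# Format sketch (assumptions: Python 3, the model.safetensors referenced in
# the Modelfile). safetensors files start with an unsigned 64-bit
# little-endian header length, then a JSON header - no GGUF magic bytes.
import json
import struct

with open("model.safetensors", "rb") as f:
    (header_len,) = struct.unpack("<Q", f.read(8))
    header = json.loads(f.read(header_len))

print(sorted(header)[:5])  # a few keys (tensor names) from the header
```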
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3246/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6015
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6015/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6015/comments
https://api.github.com/repos/ollama/ollama/issues/6015/events
https://github.com/ollama/ollama/issues/6015
2,433,425,353
I_kwDOJ0Z1Ps6RCx_J
6,015
Able to write the prompt while the model is loading in the background
{ "login": "echo-saurav", "id": 76121100, "node_id": "MDQ6VXNlcjc2MTIxMTAw", "avatar_url": "https://avatars.githubusercontent.com/u/76121100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/echo-saurav", "html_url": "https://github.com/echo-saurav", "followers_url": "https://api.github.com/users/echo-saurav/followers", "following_url": "https://api.github.com/users/echo-saurav/following{/other_user}", "gists_url": "https://api.github.com/users/echo-saurav/gists{/gist_id}", "starred_url": "https://api.github.com/users/echo-saurav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/echo-saurav/subscriptions", "organizations_url": "https://api.github.com/users/echo-saurav/orgs", "repos_url": "https://api.github.com/users/echo-saurav/repos", "events_url": "https://api.github.com/users/echo-saurav/events{/privacy}", "received_events_url": "https://api.github.com/users/echo-saurav/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-07-27T10:59:07
2024-07-27T10:59:07
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
In the CLI we currently have to wait for the model to finish loading before we can write a prompt. Loading doesn't take long, but we could spend that loading time writing the prompt.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6015/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1996
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1996/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1996/comments
https://api.github.com/repos/ollama/ollama/issues/1996/events
https://github.com/ollama/ollama/issues/1996
2,080,894,713
I_kwDOJ0Z1Ps58B-75
1,996
Ollama quits when attempting to run anything.
{ "login": "Maxwelldoug", "id": 29025327, "node_id": "MDQ6VXNlcjI5MDI1MzI3", "avatar_url": "https://avatars.githubusercontent.com/u/29025327?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Maxwelldoug", "html_url": "https://github.com/Maxwelldoug", "followers_url": "https://api.github.com/users/Maxwelldoug/followers", "following_url": "https://api.github.com/users/Maxwelldoug/following{/other_user}", "gists_url": "https://api.github.com/users/Maxwelldoug/gists{/gist_id}", "starred_url": "https://api.github.com/users/Maxwelldoug/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Maxwelldoug/subscriptions", "organizations_url": "https://api.github.com/users/Maxwelldoug/orgs", "repos_url": "https://api.github.com/users/Maxwelldoug/repos", "events_url": "https://api.github.com/users/Maxwelldoug/events{/privacy}", "received_events_url": "https://api.github.com/users/Maxwelldoug/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
4
2024-01-14T20:56:38
2024-01-26T21:33:22
2024-01-26T21:33:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
You folks don't have any templates in place, so I apologize in advance. I've got a server that I recently deployed (non docker) ollama to, and I kept getting empty responses whenever I tried to run something. upon further investigation of the systemd service, it's exiting with status 2. Here's the last few hundred lines of journalctl: ``` Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.cgocall(0x9c1470, 0xc00013c6a0) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc00013c678 sp=>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/jmorganca/ollama/llm._Cfunc_dynamic_shim_llama_server_init({0x7>Jan 14 20:38:49 tyrannosaurus ollama[39798]: _cgo_gotypes.go:287 +0x45 fp=0xc00013c6a0 sp=0xc00013c678 pc=0x7cd>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/jmorganca/ollama/llm.(*shimExtServer).llama_server_init.func1(0>Jan 14 20:38:49 tyrannosaurus ollama[39798]: /go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:40 +0xe>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/jmorganca/ollama/llm.(*shimExtServer).llama_server_init(0xc0000>Jan 14 20:38:49 tyrannosaurus ollama[39798]: /go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:40 +0x1>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/jmorganca/ollama/llm.newExtServer({0x17842518, 0xc0004667e0}, {>Jan 14 20:38:49 tyrannosaurus ollama[39798]: /go/src/github.com/jmorganca/ollama/llm/ext_server_common.go:146 +>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/jmorganca/ollama/llm.newDynamicShimExtServer({0xc00071c000, 0x2>Jan 14 20:38:49 tyrannosaurus ollama[39798]: /go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:93 +0x5>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/jmorganca/ollama/llm.newLlmServer({0xc3d801, 0x4}, {0xc00012815>Jan 14 20:38:49 tyrannosaurus ollama[39798]: /go/src/github.com/jmorganca/ollama/llm/llm.go:86 +0x16b fp=0xc000>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/jmorganca/ollama/llm.New({0xc0004aa180?, 0x0?}, {0xc000128150, >Jan 14 20:38:49 tyrannosaurus ollama[39798]: /go/src/github.com/jmorganca/ollama/llm/llm.go:76 +0x233 fp=0xc000>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/jmorganca/ollama/server.load(0xc000002000?, 0xc000002000, {{0x0>Jan 14 20:38:49 tyrannosaurus ollama[39798]: /go/src/github.com/jmorganca/ollama/server/routes.go:84 +0x425 fp=>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/jmorganca/ollama/server.ChatHandler(0xc000486600) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /go/src/github.com/jmorganca/ollama/server/routes.go:1057 +0x828 f>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/gin-gonic/gin.(*Context).Next(...) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func1(0xc00048>Jan 14 20:38:49 tyrannosaurus ollama[39798]: /go/src/github.com/jmorganca/ollama/server/routes.go:876 +0x68 fp=>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/gin-gonic/gin.(*Context).Next(...) 
Jan 14 20:38:49 tyrannosaurus ollama[39798]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/gin-gonic/gin.CustomRecoveryWithWriter.func1(0xc000486600) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/recovery.go:102 +>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/gin-gonic/gin.(*Context).Next(...) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/gin-gonic/gin.LoggerWithConfig.func1(0xc000486600) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/logger.go:240 +0x>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/gin-gonic/gin.(*Context).Next(...) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/gin-gonic/gin.(*Engine).handleHTTPRequest(0xc0000ebba0, 0xc0004>Jan 14 20:38:49 tyrannosaurus ollama[39798]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:620 +0x65b>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/gin-gonic/gin.(*Engine).ServeHTTP(0xc0000ebba0, {0x1783c860?, 0>Jan 14 20:38:49 tyrannosaurus ollama[39798]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:576 +0x1dd>Jan 14 20:38:49 tyrannosaurus ollama[39798]: net/http.serverHandler.ServeHTTP({0x1783ab80?}, {0x1783c860?, 0xc00044e2a0>Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/net/http/server.go:2938 +0x8e fp=0xc00013db78 sp>Jan 14 20:38:49 tyrannosaurus ollama[39798]: net/http.(*conn).serve(0xc0000fe240, {0x1783ded8, 0xc000718240}) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/net/http/server.go:2009 +0x5f4 fp=0xc00013dfb8 s>Jan 14 20:38:49 tyrannosaurus ollama[39798]: net/http.(*Server).Serve.func3() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/net/http/server.go:3086 +0x28 fp=0xc00013dfe0 sp>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.goexit() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00013dfe8 sp>Jan 14 20:38:49 tyrannosaurus ollama[39798]: created by net/http.(*Server).Serve in goroutine 1 Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/net/http/server.go:3086 +0x5cb Jan 14 20:38:49 tyrannosaurus ollama[39798]: goroutine 1 [IO wait]: Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.gopark(0x4a05b0?, 0xc00053b828?, 0x78?, 0xb8?, 0x5166dd?) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0005af808 sp=0xc>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.netpollblock(0x48b9d2?, 0x428946?, 0x0?) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc0005af840 sp=>Jan 14 20:38:49 tyrannosaurus ollama[39798]: internal/poll.runtime_pollWait(0x7fa3240b9e80, 0x72) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc0005af860 sp=>Jan 14 20:38:49 tyrannosaurus ollama[39798]: internal/poll.(*pollDesc).wait(0xc000488000?, 0x4?, 0x0) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc>Jan 14 20:38:49 tyrannosaurus ollama[39798]: internal/poll.(*pollDesc).waitRead(...) 
Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/internal/poll/fd_poll_runtime.go:89 Jan 14 20:38:49 tyrannosaurus ollama[39798]: internal/poll.(*FD).Accept(0xc000488000) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac fp=0xc0005af>Jan 14 20:38:49 tyrannosaurus ollama[39798]: net.(*netFD).accept(0xc000488000) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/net/fd_unix.go:172 +0x29 fp=0xc0005af9e8 sp=0xc0>Jan 14 20:38:49 tyrannosaurus ollama[39798]: net.(*TCPListener).accept(0xc0004595a0) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/net/tcpsock_posix.go:152 +0x1e fp=0xc0005afa10 s>Jan 14 20:38:49 tyrannosaurus ollama[39798]: net.(*TCPListener).Accept(0xc0004595a0) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/net/tcpsock.go:315 +0x30 fp=0xc0005afa40 sp=0xc0>Jan 14 20:38:49 tyrannosaurus ollama[39798]: net/http.(*onceCloseListener).Accept(0xc0000fe240?) Jan 14 20:38:49 tyrannosaurus ollama[39798]: <autogenerated>:1 +0x24 fp=0xc0005afa58 sp=0xc0005afa40 pc=0x711184Jan 14 20:38:49 tyrannosaurus ollama[39798]: net/http.(*Server).Serve(0xc000398ff0, {0x1783c650, 0xc0004595a0}) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/net/http/server.go:3056 +0x364 fp=0xc0005afb88 s>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/jmorganca/ollama/server.Serve({0x1783c650, 0xc0004595a0}) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /go/src/github.com/jmorganca/ollama/server/routes.go:956 +0x389 fp>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/jmorganca/ollama/cmd.RunServer(0xc000486300?, {0x17d9db40?, 0x4>Jan 14 20:38:49 tyrannosaurus ollama[39798]: /go/src/github.com/jmorganca/ollama/cmd/cmd.go:634 +0x199 fp=0xc00>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/spf13/cobra.(*Command).execute(0xc00041b800, {0x17d9db40, 0x0, >Jan 14 20:38:49 tyrannosaurus ollama[39798]: /root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x8>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/spf13/cobra.(*Command).ExecuteC(0xc00041ac00) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/spf13/cobra.(*Command).Execute(...) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992 Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/spf13/cobra.(*Command).ExecuteContext(...) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /root/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985 Jan 14 20:38:49 tyrannosaurus ollama[39798]: main.main() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /go/src/github.com/jmorganca/ollama/main.go:11 +0x4d fp=0xc0005aff>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.main() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/proc.go:267 +0x2bb fp=0xc0005affe0 sp=0x>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.goexit() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005affe8 sp>Jan 14 20:38:49 tyrannosaurus ollama[39798]: goroutine 2 [force gc (idle)]: Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00006efa8 sp=0xc>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.goparkunlock(...) 
Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/proc.go:404 Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.forcegchelper() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/proc.go:322 +0xb3 fp=0xc00006efe0 sp=0xc>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.goexit() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006efe8 sp>Jan 14 20:38:49 tyrannosaurus ollama[39798]: created by runtime.init.6 in goroutine 1 Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/proc.go:310 +0x1a Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/mgc.go:200 +0x25 fp=0xc00006f7e0 sp=0xc0>Jan 14 20:38:49 tyrannosaurus ollama[39798]: goroutine 5 [finalizer wait]: Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.gopark(0xc364c0?, 0x10045f001?, 0x0?, 0x0?, 0x466045?) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00006e628 sp=0xc>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.runfinq() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/mfinal.go:193 +0x107 fp=0xc00006e7e0 sp=>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.goexit() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006e7e8 sp>Jan 14 20:38:49 tyrannosaurus ollama[39798]: created by runtime.createfing in goroutine 1 Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/mfinal.go:163 +0x3d Jan 14 20:38:49 tyrannosaurus ollama[39798]: goroutine 6 [select, locked to thread]: Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.gopark(0xc0000707a8?, 0x2?, 0x29?, 0xe1?, 0xc0000707a4?) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000070638 sp=0xc>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.selectgo(0xc0000707a8, 0xc0000707a0, 0x0?, 0x0, 0x0?, 0x1) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc000070758 sp=>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.ensureSigM.func1() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/signal_unix.go:1014 +0x19f fp=0xc0000707>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.goexit() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000707e8 sp>Jan 14 20:38:49 tyrannosaurus ollama[39798]: created by runtime.ensureSigM in goroutine 1 Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/signal_unix.go:997 +0xc8 Jan 14 20:38:49 tyrannosaurus ollama[39798]: goroutine 18 [syscall]: Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.notetsleepg(0x0?, 0x0?) 
Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/lock_futex.go:236 +0x29 fp=0xc00006a7a0 >Jan 14 20:38:49 tyrannosaurus ollama[39798]: os/signal.signal_recv() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/sigqueue.go:152 +0x29 fp=0xc00006a7c0 sp>Jan 14 20:38:49 tyrannosaurus ollama[39798]: os/signal.loop() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/os/signal/signal_unix.go:23 +0x13 fp=0xc00006a7e>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.goexit() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006a7e8 sp>Jan 14 20:38:49 tyrannosaurus ollama[39798]: created by os/signal.Notify.func1.1 in goroutine 1 Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/os/signal/signal.go:151 +0x1f Jan 14 20:38:49 tyrannosaurus ollama[39798]: goroutine 7 [chan receive]: Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000070f18 sp=0xc>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.chanrecv(0xc0001a9a40, 0x0, 0x1) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/chan.go:583 +0x3cd fp=0xc000070f90 sp=0x>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.chanrecv1(0x0?, 0x0?) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/chan.go:442 +0x12 fp=0xc000070fb8 sp=0xc>Jan 14 20:38:49 tyrannosaurus ollama[39798]: github.com/jmorganca/ollama/server.Serve.func1() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /go/src/github.com/jmorganca/ollama/server/routes.go:938 +0x25 fp=>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.goexit() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000070fe8 sp>Jan 14 20:38:49 tyrannosaurus ollama[39798]: created by github.com/jmorganca/ollama/server.Serve in goroutine 1 Jan 14 20:38:49 tyrannosaurus ollama[39798]: /go/src/github.com/jmorganca/ollama/server/routes.go:937 +0x285 Jan 14 20:38:49 tyrannosaurus ollama[39798]: goroutine 8 [GC worker (idle)]: Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000071750 sp=0xc>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.gcBgMarkWorker() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0000717e0 sp=0xc>Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.goexit() Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000717e8 sp>Jan 14 20:38:49 tyrannosaurus ollama[39798]: created by runtime.gcBgMarkStartWorkers in goroutine 1 Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/mgc.go:1217 +0x1c Jan 14 20:38:49 tyrannosaurus ollama[39798]: goroutine 34 [GC worker (idle)]: Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.gopark(0x19dd1f8e3a4?, 0x3?, 0xa9?, 0x5f?, 0x0?) 
Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000588750 sp=0xc>
Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.gcBgMarkWorker()
Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc0005887e0 sp=0xc>
Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.goexit()
Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005887e8 sp>
Jan 14 20:38:49 tyrannosaurus ollama[39798]: created by runtime.gcBgMarkStartWorkers in goroutine 1
Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/mgc.go:1217 +0x1c
Jan 14 20:38:49 tyrannosaurus ollama[39798]: goroutine 9 [GC worker (idle)]:
Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.gopark(0x19dd1f8e426?, 0xc0004627a0?, 0x1a?, 0x14?, 0x0?)
Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000071f50 sp=0xc>
Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.gcBgMarkWorker()
Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/mgc.go:1293 +0xe5 fp=0xc000071fe0 sp=0xc>
Jan 14 20:38:49 tyrannosaurus ollama[39798]: runtime.goexit()
Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000071fe8 sp>
Jan 14 20:38:49 tyrannosaurus ollama[39798]: created by runtime.gcBgMarkStartWorkers in goroutine 1
Jan 14 20:38:49 tyrannosaurus ollama[39798]: /usr/local/go/src/runtime/mgc.go:1217 +0x1c
[identical idle GC-worker stack traces for goroutines 10, 11, 12, 35, 36, 37, 38, 50, 51 and 52 omitted]
Jan 14 20:38:49 tyrannosaurus ollama[39798]: rbp    0x9c3c
Jan 14 20:38:49 tyrannosaurus ollama[39798]: rsp    0x7fa2d6ffc0e0
Jan 14 20:38:49 tyrannosaurus ollama[39798]: r8     0x7fa2d6ffc1b0
Jan 14 20:38:49 tyrannosaurus ollama[39798]: r9     0x7fa2d6ffc150
Jan 14 20:38:49 tyrannosaurus ollama[39798]: r10    0x8
Jan 14 20:38:49 tyrannosaurus ollama[39798]: r11    0x246
Jan 14 20:38:49 tyrannosaurus ollama[39798]: r12    0x6
Jan 14 20:38:49 tyrannosaurus ollama[39798]: r13    0x16
Jan 14 20:38:49 tyrannosaurus ollama[39798]: r14    0x1b01560400
Jan 14 20:38:49 tyrannosaurus ollama[39798]: r15    0x1bbd588020
Jan 14 20:38:49 tyrannosaurus ollama[39798]: rip    0x7fa36d5699fc
Jan 14 20:38:49 tyrannosaurus ollama[39798]: rflags 0x246
Jan 14 20:38:49 tyrannosaurus ollama[39798]: cs     0x33
Jan 14 20:38:49 tyrannosaurus ollama[39798]: fs     0x0
Jan 14 20:38:49 tyrannosaurus ollama[39798]: gs     0x0
Jan 14 20:38:50 tyrannosaurus systemd[1]: ollama.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Jan 14 20:38:50 tyrannosaurus systemd[1]: ollama.service: Failed with result 'exit-code'.
Jan 14 20:38:50 tyrannosaurus systemd[1]: ollama.service: Consumed 4.330s CPU time.
Jan 14 20:38:53 tyrannosaurus systemd[1]: ollama.service: Scheduled restart job, restart counter is at 1.
Jan 14 20:38:53 tyrannosaurus systemd[1]: Stopped Ollama Service.
Jan 14 20:38:53 tyrannosaurus systemd[1]: ollama.service: Consumed 4.330s CPU time.
Jan 14 20:38:53 tyrannosaurus systemd[1]: Started Ollama Service.
Jan 14 20:38:53 tyrannosaurus ollama[40136]: 2024/01/14 20:38:53 images.go:834: total blobs: 25
Jan 14 20:38:53 tyrannosaurus ollama[40136]: 2024/01/14 20:38:53 images.go:841: total unused blobs removed: 0
Jan 14 20:38:53 tyrannosaurus ollama[40136]: 2024/01/14 20:38:53 routes.go:929: Listening on [::]:11434 (version 0.1.18)
Jan 14 20:38:53 tyrannosaurus ollama[40136]: 2024/01/14 20:38:53 shim_ext_server.go:142: Dynamic LLM variants [cuda roc>
Jan 14 20:38:53 tyrannosaurus ollama[40136]: 2024/01/14 20:38:53 gpu.go:34: Detecting GPU type
Jan 14 20:38:53 tyrannosaurus ollama[40136]: 2024/01/14 20:38:53 gpu.go:53: Nvidia GPU detected
```

The server in question is running Ubuntu 22.04.3 LTS, with the following spec:

Host: PowerEdge R730
Kernel: 5.15.0-91-generic
CPU: Intel Xeon E5-2620 v3 (24) @ 2.600GHz
GPU: NVIDIA GeForce GTX 745
Memory: 19597MiB / 96552MiB

Let me know if anything else is needed or if this is a known issue.
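If more detail would help triage, the usual starting points are the GPU/driver state and the untruncated service log (the `>` marks in the paste above are journalctl cutting long lines at the terminal width); a sketch, assuming the standard Linux install:

```shell
# Driver version and GPU visibility as seen by the NVIDIA stack
nvidia-smi

# Full ollama service log around the crash, message text only, no pager
journalctl -u ollama --no-pager -o cat
```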
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1996/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2225
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2225/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2225/comments
https://api.github.com/repos/ollama/ollama/issues/2225/events
https://github.com/ollama/ollama/issues/2225
2,103,264,294
I_kwDOJ0Z1Ps59XUQm
2,225
Ollama stops generating output and fails to run models after a few minutes
{ "login": "TheStarAlight", "id": 105955974, "node_id": "U_kgDOBlDChg", "avatar_url": "https://avatars.githubusercontent.com/u/105955974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TheStarAlight", "html_url": "https://github.com/TheStarAlight", "followers_url": "https://api.github.com/users/TheStarAlight/followers", "following_url": "https://api.github.com/users/TheStarAlight/following{/other_user}", "gists_url": "https://api.github.com/users/TheStarAlight/gists{/gist_id}", "starred_url": "https://api.github.com/users/TheStarAlight/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TheStarAlight/subscriptions", "organizations_url": "https://api.github.com/users/TheStarAlight/orgs", "repos_url": "https://api.github.com/users/TheStarAlight/repos", "events_url": "https://api.github.com/users/TheStarAlight/events{/privacy}", "received_events_url": "https://api.github.com/users/TheStarAlight/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
41
2024-01-27T06:22:10
2024-06-03T23:44:10
2024-04-15T19:09:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, I'm running Ollama on a Debian server and use oterm as the interface. After some chats (fewer than 10 normal questions), Ollama stops responding, and running `ollama run mixtral` never succeeds (it just keeps loading). I noticed the same issue reported in #1863. Is there a solution at the moment? Also, I'm not the administrator of the server and I don't even know how to restart Ollama 😂. The serve process seems to run as another user named ollama. Can anyone tell me how to restart it? To developers: I can provide debug information if you need it; just tell me how. Thanks :D
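For what it's worth, a minimal sketch of how the standard Linux install is usually restarted, assuming it was set up as the default `ollama.service` systemd unit (this requires sudo rights, so the server admin may need to run it):

```shell
# Check the current state of the service (unit name assumes the default install)
sudo systemctl status ollama

# Restart it; the serve process runs as the dedicated "ollama" user
sudo systemctl restart ollama

# Follow the service logs to confirm it came back up
journalctl -u ollama -f
```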
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2225/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2225/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7320
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7320/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7320/comments
https://api.github.com/repos/ollama/ollama/issues/7320/events
https://github.com/ollama/ollama/issues/7320
2,605,969,661
I_kwDOJ0Z1Ps6bU_D9
7,320
0.4.0 regression
{ "login": "skobkin", "id": 967576, "node_id": "MDQ6VXNlcjk2NzU3Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/967576?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skobkin", "html_url": "https://github.com/skobkin", "followers_url": "https://api.github.com/users/skobkin/followers", "following_url": "https://api.github.com/users/skobkin/following{/other_user}", "gists_url": "https://api.github.com/users/skobkin/gists{/gist_id}", "starred_url": "https://api.github.com/users/skobkin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skobkin/subscriptions", "organizations_url": "https://api.github.com/users/skobkin/orgs", "repos_url": "https://api.github.com/users/skobkin/repos", "events_url": "https://api.github.com/users/skobkin/events{/privacy}", "received_events_url": "https://api.github.com/users/skobkin/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA", "url": "https://api.github.com/repos/ollama/ollama/labels/amd", "name": "amd", "color": "000000", "default": false, "description": "Issues relating to AMD GPUs and ROCm" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
0
2024-10-22T16:44:36
2024-10-22T19:54:17
2024-10-22T19:54:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

Just updated ollama to [`0.4.0-rc3-rocm`](https://hub.docker.com/layers/ollama/ollama/0.4.0-rc3/images/sha256-6b75f17d6160b28dec8d8d519ceec02dfdae20e1c2451db34f3a3351f5de373a?context=explore) to test the new LLaMA 3.2 Vision capabilities. But it isn't working and returns 500 to OpenWebUI. It isn't working even with LLaMA 3.1 Lexi, which I was using before the update.

![image](https://github.com/user-attachments/assets/9c4a002e-98c7-47af-b2ef-2405da33c559)

Here are the `ollama` container logs when trying to chat with LLaMA 3.1 Lexi 8B Q6:

```
ollama | time=2024-10-22T16:33:28.879Z level=INFO source=images.go:754 msg="total blobs: 84"
ollama | time=2024-10-22T16:33:28.880Z level=INFO source=images.go:761 msg="total unused blobs removed: 0"
ollama | time=2024-10-22T16:33:28.880Z level=INFO source=routes.go:1217 msg="Listening on [::]:11434 (version 0.4.0-rc3)"
ollama | time=2024-10-22T16:33:28.880Z level=INFO source=common.go:82 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 rocm]"
ollama | time=2024-10-22T16:33:28.880Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
ollama | time=2024-10-22T16:33:28.882Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
ollama | time=2024-10-22T16:33:28.884Z level=INFO source=amd_linux.go:383 msg="amdgpu is supported" gpu=0 gpu_type=gfx1101
ollama | time=2024-10-22T16:33:28.884Z level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=1 total="512.0 MiB"
ollama | time=2024-10-22T16:33:28.884Z level=INFO source=types.go:123 msg="inference compute" id=0 library=rocm variant="" compute=gfx1101 driver=0.0 name=1002:747e total="16.0 GiB" available="15.2 GiB"
ollama | [GIN] 2024/10/22 - 16:33:39 | 200 | 1.70353ms | 172.24.0.3 | GET "/api/tags"
ollama | [GIN] 2024/10/22 - 16:33:43 | 200 | 21.159µs | 172.24.0.1 | GET "/"
ollama | time=2024-10-22T16:33:52.469Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-545ba086eb5179e8f97f9eb7d54c61555a7cd645c5b26f9551209022878abb2c gpu=0 parallel=4 available=16308183040 required="9.7 GiB"
ollama | time=2024-10-22T16:33:52.469Z level=INFO source=llama-server.go:72 msg="system memory" total="30.5 GiB" free="24.8 GiB" free_swap="7.8 GiB"
ollama | time=2024-10-22T16:33:52.469Z level=INFO source=memory.go:346 msg="offload to rocm" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[15.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="9.7 GiB" memory.required.partial="9.7 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[9.7 GiB]" memory.weights.total="7.9 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="532.3 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
ollama | time=2024-10-22T16:33:52.470Z level=INFO source=llama-server.go:355 msg="starting llama server" cmd="/usr/lib/ollama/runners/rocm/ollama_llama_server --model /root/.ollama/models/blobs/sha256-545ba086eb5179e8f97f9eb7d54c61555a7cd645c5b26f9551209022878abb2c --ctx-size 8192 --batch-size 512 --embedding --n-gpu-layers 33 --threads 12 --parallel 4 --port 34353"
ollama | time=2024-10-22T16:33:52.470Z level=INFO source=sched.go:450 msg="loaded runners" count=1
ollama | time=2024-10-22T16:33:52.470Z level=INFO source=llama-server.go:534 msg="waiting for llama runner to start responding"
ollama | time=2024-10-22T16:33:52.470Z level=INFO source=llama-server.go:568 msg="waiting for server to become available" status="llm server error"
ollama | /usr/lib/ollama/runners/rocm/ollama_llama_server: error while loading shared libraries: libelf.so.1: cannot open shared object file: No such file or directory
ollama | time=2024-10-22T16:33:52.720Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: exit status 127"
```

Just in case: I have one integrated GPU and one additional GPU in a PCI-e slot.

I called this a regression because it fails even on a previously perfectly functioning model, not only on LLaMA 3.2. It looks similar to #7279, but I'm not sure, as the output isn't completely identical.

### OS
Linux, Docker

### GPU
AMD

### CPU
AMD

### Ollama version
0.4.0-rc3
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7320/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5183
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5183/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5183/comments
https://api.github.com/repos/ollama/ollama/issues/5183/events
https://github.com/ollama/ollama/issues/5183
2,364,585,215
I_kwDOJ0Z1Ps6M8LT_
5,183
`ollama show` has quotes around stop words
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2024-06-20T14:20:54
2024-06-23T02:09:25
2024-06-23T02:09:25
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

```
% ollama show llama3
  Model
    arch              llama
    parameters        8.0B
    quantization      Q4_0
    context length    8192
    embedding length  4096

  Parameters
    stop        "<|start_header_id|>"
    stop        "<|end_header_id|>"
    stop        "<|eot_id|>"
    num_keep    24

  License
    META LLAMA 3 COMMUNITY LICENSE AGREEMENT
    Meta Llama 3 Version Release Date: April 18, 2024
```

### OS
_No response_

### GPU
_No response_

### CPU
_No response_

### Ollama version
0.1.45+
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5183/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5333
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5333/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5333/comments
https://api.github.com/repos/ollama/ollama/issues/5333/events
https://github.com/ollama/ollama/pull/5333
2,378,631,008
PR_kwDOJ0Z1Ps5zzEuN
5,333
update readme for gemma 2
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-06-27T16:43:43
2024-06-27T16:45:18
2024-06-27T16:45:16
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5333", "html_url": "https://github.com/ollama/ollama/pull/5333", "diff_url": "https://github.com/ollama/ollama/pull/5333.diff", "patch_url": "https://github.com/ollama/ollama/pull/5333.patch", "merged_at": "2024-06-27T16:45:16" }
null
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5333/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2534
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2534/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2534/comments
https://api.github.com/repos/ollama/ollama/issues/2534/events
https://github.com/ollama/ollama/issues/2534
2,137,822,432
I_kwDOJ0Z1Ps5_bJTg
2,534
Packaging issues with vendored llama.cpp
{ "login": "viraptor", "id": 188063, "node_id": "MDQ6VXNlcjE4ODA2Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/188063?v=4", "gravatar_id": "", "url": "https://api.github.com/users/viraptor", "html_url": "https://github.com/viraptor", "followers_url": "https://api.github.com/users/viraptor/followers", "following_url": "https://api.github.com/users/viraptor/following{/other_user}", "gists_url": "https://api.github.com/users/viraptor/gists{/gist_id}", "starred_url": "https://api.github.com/users/viraptor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/viraptor/subscriptions", "organizations_url": "https://api.github.com/users/viraptor/orgs", "repos_url": "https://api.github.com/users/viraptor/repos", "events_url": "https://api.github.com/users/viraptor/events{/privacy}", "received_events_url": "https://api.github.com/users/viraptor/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-02-16T03:51:12
2024-10-17T22:03:10
2024-10-17T22:03:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, I'm trying to package the new version (after llama.cpp has been vendored) for nixpkgs and I'm running into issues. Essentially, ollama tries to be very clever and generic with the build, but this works against the systems that provide the packaged ollama and llama.cpp. Since we already have the llama.cpp packages ready with all the complicated cuda/rocm/apple dependencies and flags in order, it's extra, unnecessary work to replicate all of that for ollama as well. While I'm trying to find a good way to un-vendor it and use the existing library (with your provided patches), it's getting problematic. Your custom distribution works for you, but I'd love to be able to just build one version with a specific config, referencing an existing llama.cpp.

Have you considered upstreaming your changes to llama.cpp? My happy path as a packager would be: ollama depends on llama.cpp, optionally requiring an environment variable to point at a specific shared library.

There are also minor issues in multiple places, like:

- both cmake and the compiler being used directly instead of having a complete cmake build, [here](https://github.com/ollama/ollama/blob/a468ae045971d009b782b259d21869f2767269fa/llm/generate/gen_common.sh#L87)
- g++ being used instead of `$CXX`, which breaks builds on some systems, [here](https://github.com/ollama/ollama/blob/a468ae045971d009b782b259d21869f2767269fa/llm/generate/gen_common.sh#L89) — see the sketch below

Getting all the required functions back into llama.cpp, or at least providing everything as a drop-in folder that can be placed in llama.cpp/examples (so no complex build-time modifications/generation is done in ollama) would be a great improvement. It will probably also save you some headaches in the future when you update llama.cpp.
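To illustrate the second bullet, a minimal sketch of the kind of change a packager would want in the generate script (the flags and the `$LIB` variable here are placeholders, not the script's actual ones):

```shell
# Instead of hard-coding the host compiler:
#   g++ -fPIC -g -shared -o "$LIB" ...
# respect the caller's toolchain, falling back to g++ only when CXX is unset:
${CXX:-g++} -fPIC -g -shared -o "$LIB" "$@"
```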
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2534/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2534/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/65
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/65/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/65/comments
https://api.github.com/repos/ollama/ollama/issues/65/events
https://github.com/ollama/ollama/pull/65
1,797,545,560
PR_kwDOJ0Z1Ps5VILe1
65
call llama.cpp directly from go
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-07-10T20:36:56
2023-07-11T21:02:07
2023-07-11T19:01:03
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/65", "html_url": "https://github.com/ollama/ollama/pull/65", "diff_url": "https://github.com/ollama/ollama/pull/65.diff", "patch_url": "https://github.com/ollama/ollama/pull/65.patch", "merged_at": "2023-07-11T19:01:03" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/65/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/65/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5641
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5641/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5641/comments
https://api.github.com/repos/ollama/ollama/issues/5641/events
https://github.com/ollama/ollama/issues/5641
2,404,358,129
I_kwDOJ0Z1Ps6PT5fx
5,641
Ollama Puts out Gibberish After a While.
{ "login": "chigkim", "id": 22120994, "node_id": "MDQ6VXNlcjIyMTIwOTk0", "avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chigkim", "html_url": "https://github.com/chigkim", "followers_url": "https://api.github.com/users/chigkim/followers", "following_url": "https://api.github.com/users/chigkim/following{/other_user}", "gists_url": "https://api.github.com/users/chigkim/gists{/gist_id}", "starred_url": "https://api.github.com/users/chigkim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chigkim/subscriptions", "organizations_url": "https://api.github.com/users/chigkim/orgs", "repos_url": "https://api.github.com/users/chigkim/repos", "events_url": "https://api.github.com/users/chigkim/events{/privacy}", "received_events_url": "https://api.github.com/users/chigkim/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-07-11T23:31:36
2024-10-24T02:50:20
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

When I run the MMLU Pro benchmark on phi3 or deepseek-coder-v2 with [this script](https://github.com/chigkim/Ollama-MMLU-Pro/), which uses the OpenAI-compatible API, it runs fine for a while. Then, all of a sudden, it starts to output:

deepseek-coder-v2:16b-lite-instruct-q8_0
`@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@`

Phi3:3.8b-mini-128k-instruct-q8_0
`<unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk>`

The entire response contains nothing but those characters, and once it happens, the same response comes back for every question until the end. I have the environment variable `export OLLAMA_NUM_PARALLEL=4` set, and I'm running the script with the `--parallel 4` option; a sketch of that setup follows below. According to the token usage Ollama returns, the prompt for each question never goes above 2048 tokens. So far I have seen this happen on my Mac with an M3 Max (64 GB) as well as on Runpod instances with an RTX 3090 and an RTX 4090. This is going to be a hard bug to track down, because it only happens sometimes, and you have to run for a while before it does. Does anyone have any suspicion about what might cause this?

### OS
Linux, macOS

### GPU
Nvidia, Apple

### CPU
AMD, Apple

### Ollama version
0.1.48, 0.2.1
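For anyone trying to reproduce: the parallel setting described above is a server-side environment variable, so a run looks roughly like this (a sketch; the benchmark's entry-point name is a placeholder, check the linked repo for the real one):

```shell
# Serve with 4 parallel request slots, as in the report
OLLAMA_NUM_PARALLEL=4 ollama serve

# In another shell, drive it with the benchmark script
python run_benchmark.py --parallel 4
```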
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5641/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5641/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3592
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3592/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3592/comments
https://api.github.com/repos/ollama/ollama/issues/3592/events
https://github.com/ollama/ollama/issues/3592
2,237,568,024
I_kwDOJ0Z1Ps6FXpQY
3,592
Long context like 32000 with command-r produces gibberish with random characters.
{ "login": "chigkim", "id": 22120994, "node_id": "MDQ6VXNlcjIyMTIwOTk0", "avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chigkim", "html_url": "https://github.com/chigkim", "followers_url": "https://api.github.com/users/chigkim/followers", "following_url": "https://api.github.com/users/chigkim/following{/other_user}", "gists_url": "https://api.github.com/users/chigkim/gists{/gist_id}", "starred_url": "https://api.github.com/users/chigkim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chigkim/subscriptions", "organizations_url": "https://api.github.com/users/chigkim/orgs", "repos_url": "https://api.github.com/users/chigkim/repos", "events_url": "https://api.github.com/users/chigkim/events{/privacy}", "received_events_url": "https://api.github.com/users/chigkim/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-04-11T12:03:14
2024-04-19T15:41:03
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

It responds in random characters.

<5+a\j`=7Dc'_2^@&Til$g#*�wR0;��)3�ey�un��J���fd6�]{���-S����t�Z���:�x�"b|BI�jmĶ7��T'V?4_k^z0NU+=��i�

### What did you expect to see?

Response in English.

### Steps to reproduce

Initiate chat with command-r via api with num_ctx > 25000.

### Are there any recent changes that introduced the issue?

0.3.2-rc1

### OS
macOS

### Architecture
arm64

### Platform
_No response_

### Ollama version
0.3.2-rc1

### GPU
Apple

### GPU info
m3-max

### CPU
Apple

### Other software
_No response_
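A minimal reproduction along the lines of the steps above, using Ollama's native `/api/generate` endpoint (the prompt is a placeholder; the point is the large `num_ctx`):

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "command-r",
  "prompt": "Summarize the following document: ...",
  "options": { "num_ctx": 32000 }
}'
```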
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3592/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/788
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/788/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/788/comments
https://api.github.com/repos/ollama/ollama/issues/788/events
https://github.com/ollama/ollama/issues/788
1,942,855,896
I_kwDOJ0Z1Ps5zzaDY
788
i got this issue from orca-mini 7b
{ "login": "Boluex", "id": 90112749, "node_id": "MDQ6VXNlcjkwMTEyNzQ5", "avatar_url": "https://avatars.githubusercontent.com/u/90112749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Boluex", "html_url": "https://github.com/Boluex", "followers_url": "https://api.github.com/users/Boluex/followers", "following_url": "https://api.github.com/users/Boluex/following{/other_user}", "gists_url": "https://api.github.com/users/Boluex/gists{/gist_id}", "starred_url": "https://api.github.com/users/Boluex/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Boluex/subscriptions", "organizations_url": "https://api.github.com/users/Boluex/orgs", "repos_url": "https://api.github.com/users/Boluex/repos", "events_url": "https://api.github.com/users/Boluex/events{/privacy}", "received_events_url": "https://api.github.com/users/Boluex/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
36
2023-10-14T01:32:07
2024-07-26T15:20:45
2023-10-31T17:29:52
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I am using a CPU-only system with 8 GB of RAM and no VRAM. I downloaded the orca-mini 7B model on Ollama, but got this error: `Error: llama runner process has terminated`. How can I fix this? Please help, guys.
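A common workaround on an 8 GB, CPU-only machine is to run a smaller model or a lighter quantization, since a 7B model plus OS overhead can exhaust 8 GB of RAM; for example (exact tag availability should be checked on the model's library page):

```shell
# The 3B variant fits much more comfortably in 8 GB of system RAM
ollama run orca-mini:3b

# Or try a smaller-footprint quantization of the 7B model, if the tag exists
ollama run orca-mini:7b-q4_0
```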
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/788/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7518
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7518/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7518/comments
https://api.github.com/repos/ollama/ollama/issues/7518/events
https://github.com/ollama/ollama/issues/7518
2,636,634,653
I_kwDOJ0Z1Ps6dJ9od
7,518
Support for # of completions? (for loom obsidian plugin)
{ "login": "cognitivetech", "id": 55156785, "node_id": "MDQ6VXNlcjU1MTU2Nzg1", "avatar_url": "https://avatars.githubusercontent.com/u/55156785?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cognitivetech", "html_url": "https://github.com/cognitivetech", "followers_url": "https://api.github.com/users/cognitivetech/followers", "following_url": "https://api.github.com/users/cognitivetech/following{/other_user}", "gists_url": "https://api.github.com/users/cognitivetech/gists{/gist_id}", "starred_url": "https://api.github.com/users/cognitivetech/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cognitivetech/subscriptions", "organizations_url": "https://api.github.com/users/cognitivetech/orgs", "repos_url": "https://api.github.com/users/cognitivetech/repos", "events_url": "https://api.github.com/users/cognitivetech/events{/privacy}", "received_events_url": "https://api.github.com/users/cognitivetech/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-11-05T22:46:12
2024-11-05T22:46:12
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm trying to adapt the Loom Obsidian plugin to use Ollama. It seems to work fine, except I only ever get one completion, where `settings.n` is the number of completions I would like to generate.

https://github.com/cosmicoptima/loom/blob/master/main.ts

```javascript
async completeOpenAICompat(prompt: string) {
  prompt = this.trimOpenAIPrompt(prompt);

  // @ts-expect-error TODO
  let url = getPreset(this.settings).url;
  if (!(url.startsWith("http://") || url.startsWith("https://")))
    url = "https://" + url;
  if (!url.endsWith("/")) url += "/";
  url = url.replace(/v1\//, "");
  url += "v1/completions";

  let body: any = {
    prompt,
    model: getPreset(this.settings).model,
    max_tokens: this.settings.maxTokens,
    n: this.settings.n,
    temperature: this.settings.temperature,
    top_p: this.settings.topP,
    best_of:
      this.settings.bestOf > this.settings.n
        ? this.settings.bestOf
        : this.settings.n,
  };
```
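Until `n` is honored, one client-side workaround is to issue the request `n` times and take the single choice from each response. A shell sketch against the OpenAI-compatible endpoint (default local URL; model name and prompt are placeholders):

```shell
# Fire three independent completion requests and collect one choice from each
for i in 1 2 3; do
  curl -s http://localhost:11434/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "llama3", "prompt": "Once upon a time", "max_tokens": 64, "temperature": 1.0}' \
    | jq -r '.choices[0].text'
done
```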
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7518/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7518/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8567
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8567/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8567/comments
https://api.github.com/repos/ollama/ollama/issues/8567/events
https://github.com/ollama/ollama/pull/8567
2,809,888,422
PR_kwDOJ0Z1Ps6I7BW-
8,567
build: support Compute Capability 5.0, 5.2 and 5.3 for CUDA 12.x
{ "login": "prusnak", "id": 42201, "node_id": "MDQ6VXNlcjQyMjAx", "avatar_url": "https://avatars.githubusercontent.com/u/42201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prusnak", "html_url": "https://github.com/prusnak", "followers_url": "https://api.github.com/users/prusnak/followers", "following_url": "https://api.github.com/users/prusnak/following{/other_user}", "gists_url": "https://api.github.com/users/prusnak/gists{/gist_id}", "starred_url": "https://api.github.com/users/prusnak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prusnak/subscriptions", "organizations_url": "https://api.github.com/users/prusnak/orgs", "repos_url": "https://api.github.com/users/prusnak/repos", "events_url": "https://api.github.com/users/prusnak/events{/privacy}", "received_events_url": "https://api.github.com/users/prusnak/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2025-01-24T16:50:29
2025-01-29T17:19:03
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8567", "html_url": "https://github.com/ollama/ollama/pull/8567", "diff_url": "https://github.com/ollama/ollama/pull/8567.diff", "patch_url": "https://github.com/ollama/ollama/pull/8567.patch", "merged_at": null }
CUDA 12.x still supports Compute Capability 5.0, 5.2 and 5.3, so let's build for these architectures as well.

I have a GPU with CC 5.2 and confirmed that before this change ollama crashes; afterwards it works just fine.

Source: https://stackoverflow.com/questions/28932864/which-compute-capability-is-supported-by-which-cuda-versions
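For context, this kind of change usually amounts to extending the architecture list passed to the CUDA compiler; a hedged sketch with CMake (the variable is standard CMake, but the exact file and the rest of the list in the Ollama tree may differ):

```shell
# Add Maxwell-era architectures (CC 5.0 / 5.2 / 5.3) to the CUDA 12 build
cmake -B build -DCMAKE_CUDA_ARCHITECTURES="50;52;53;60;61;70;75;80;86;89;90"
cmake --build build
```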
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8567/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8567/timeline
null
null
true
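For context on the compute-capability claim in the PR above: a minimal Python sketch, assuming `nvidia-smi` is on the PATH and its driver supports the `compute_cap` query field (recent drivers do), that checks whether the local GPU falls inside a given architecture list. The architecture set below is illustrative only, not Ollama's actual build configuration.

```python
import subprocess

# Architectures a hypothetical build targets (illustrative only,
# not Ollama's real CUDA architecture list).
BUILT_ARCHS = {"5.0", "5.2", "5.3", "6.0", "6.1", "7.0", "7.5", "8.0", "8.6", "8.9", "9.0"}

def gpu_compute_caps() -> list[str]:
    # Ask the NVIDIA driver for each GPU's compute capability, e.g. "5.2".
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=compute_cap", "--format=csv,noheader"],
        text=True,
    )
    return [line.strip() for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    for cap in gpu_compute_caps():
        status = "covered" if cap in BUILT_ARCHS else "NOT covered (runner would crash)"
        print(f"compute capability {cap}: {status}")
```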
https://api.github.com/repos/ollama/ollama/issues/8561
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8561/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8561/comments
https://api.github.com/repos/ollama/ollama/issues/8561/events
https://github.com/ollama/ollama/issues/8561
2,808,999,555
I_kwDOJ0Z1Ps6nbe6D
8,561
Use cases for using Ollama in Microsoft Word
{ "login": "GPTLocalhost", "id": 72584872, "node_id": "MDQ6VXNlcjcyNTg0ODcy", "avatar_url": "https://avatars.githubusercontent.com/u/72584872?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GPTLocalhost", "html_url": "https://github.com/GPTLocalhost", "followers_url": "https://api.github.com/users/GPTLocalhost/followers", "following_url": "https://api.github.com/users/GPTLocalhost/following{/other_user}", "gists_url": "https://api.github.com/users/GPTLocalhost/gists{/gist_id}", "starred_url": "https://api.github.com/users/GPTLocalhost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GPTLocalhost/subscriptions", "organizations_url": "https://api.github.com/users/GPTLocalhost/orgs", "repos_url": "https://api.github.com/users/GPTLocalhost/repos", "events_url": "https://api.github.com/users/GPTLocalhost/events{/privacy}", "received_events_url": "https://api.github.com/users/GPTLocalhost/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2025-01-24T09:46:42
2025-01-24T09:46:42
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
If Microsoft Word users are a potential target audience for Ollama, what use cases would you expect? We recently released the following quick demo based on Ollama, and we are curious about what the next use case could be from this community's perspective. We’d greatly appreciate any advice. * [Use Ollama in Microsoft Word Locally](https://medium.com/@gptlocalhost/using-ollama-in-microsoft-word-locally-b713d65d11b0)
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8561/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2726
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2726/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2726/comments
https://api.github.com/repos/ollama/ollama/issues/2726/events
https://github.com/ollama/ollama/issues/2726
2,152,232,668
I_kwDOJ0Z1Ps6ASHbc
2,726
Ollama 0.1.26 embeddings, alternative models?
{ "login": "Daniel07n", "id": 17878323, "node_id": "MDQ6VXNlcjE3ODc4MzIz", "avatar_url": "https://avatars.githubusercontent.com/u/17878323?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Daniel07n", "html_url": "https://github.com/Daniel07n", "followers_url": "https://api.github.com/users/Daniel07n/followers", "following_url": "https://api.github.com/users/Daniel07n/following{/other_user}", "gists_url": "https://api.github.com/users/Daniel07n/gists{/gist_id}", "starred_url": "https://api.github.com/users/Daniel07n/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Daniel07n/subscriptions", "organizations_url": "https://api.github.com/users/Daniel07n/orgs", "repos_url": "https://api.github.com/users/Daniel07n/repos", "events_url": "https://api.github.com/users/Daniel07n/events{/privacy}", "received_events_url": "https://api.github.com/users/Daniel07n/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
10
2024-02-24T09:35:39
2024-04-02T17:21:43
2024-03-12T04:50:27
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, is it possible to load alternative embedding models other than BERT and Nomic? For example, as with the larger LLMs, either via the list shown on Ollama.com or as a manual download from Hugging Face? (A sketch of calling the embeddings endpoint follows this record.)
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2726/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/2726/timeline
null
completed
false
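As context for the question above: Ollama later shipped several embedding models beyond the BERT/Nomic pair, and any pulled embedding model can be called through the documented `/api/embeddings` endpoint. A minimal sketch, assuming a local server on the default port and that the model (here `all-minilm`, chosen as an example) has already been pulled:

```python
import json
import urllib.request

def embed(prompt: str, model: str = "all-minilm") -> list[float]:
    # POST to the documented embeddings endpoint of a local Ollama server.
    req = urllib.request.Request(
        "http://localhost:11434/api/embeddings",
        data=json.dumps({"model": model, "prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

if __name__ == "__main__":
    vec = embed("Hello world")
    print(len(vec), vec[:5])  # dimensionality and a few leading values
```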
https://api.github.com/repos/ollama/ollama/issues/183
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/183/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/183/comments
https://api.github.com/repos/ollama/ollama/issues/183/events
https://github.com/ollama/ollama/issues/183
1,817,200,381
I_kwDOJ0Z1Ps5sUEb9
183
User should be able to find models that support commercial use or at least search by license type
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.github.com/users/technovangelist/followers", "following_url": "https://api.github.com/users/technovangelist/following{/other_user}", "gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}", "starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions", "organizations_url": "https://api.github.com/users/technovangelist/orgs", "repos_url": "https://api.github.com/users/technovangelist/repos", "events_url": "https://api.github.com/users/technovangelist/events{/privacy}", "received_events_url": "https://api.github.com/users/technovangelist/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2023-07-23T16:52:30
2023-08-30T21:36:58
2023-08-30T21:36:58
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Some of the license types allow commercial use. Today the user needs to go to other platforms to see if a model works for them. They should be able to stay at the ollama command line to get basic info such as GPL vs. Apache vs. whatever else.
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/183/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2526
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2526/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2526/comments
https://api.github.com/repos/ollama/ollama/issues/2526/events
https://github.com/ollama/ollama/pull/2526
2,137,504,500
PR_kwDOJ0Z1Ps5nB2N-
2,526
Harden the OLLAMA_HOST lookup for quotes
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-02-15T21:47:30
2024-02-15T22:13:42
2024-02-15T22:13:40
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2526", "html_url": "https://github.com/ollama/ollama/pull/2526", "diff_url": "https://github.com/ollama/ollama/pull/2526.diff", "patch_url": "https://github.com/ollama/ollama/pull/2526.patch", "merged_at": "2024-02-15T22:13:40" }
Fixes #2512
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2526/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2526/timeline
null
null
true
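The PR above hardens how `OLLAMA_HOST` is read when users set it with literal surrounding quotes (for example `OLLAMA_HOST="0.0.0.0:11434"` copied verbatim into a config file). The real fix lives in Ollama's Go code; this is only a Python sketch of the same idea, with the default host value shown as an assumption.

```python
import os

DEFAULT_HOST = "127.0.0.1:11434"  # assumed default, for illustration

def ollama_host() -> str:
    # Read the variable, then strip whitespace and any stray single or
    # double quotes that shells or config files may have left around it.
    raw = os.environ.get("OLLAMA_HOST", "")
    cleaned = raw.strip().strip("'\"")
    return cleaned or DEFAULT_HOST

if __name__ == "__main__":
    os.environ["OLLAMA_HOST"] = '"0.0.0.0:11434"'  # quoted, as in the bug report
    print(ollama_host())  # -> 0.0.0.0:11434
```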
https://api.github.com/repos/ollama/ollama/issues/1339
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1339/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1339/comments
https://api.github.com/repos/ollama/ollama/issues/1339/events
https://github.com/ollama/ollama/issues/1339
2,020,001,954
I_kwDOJ0Z1Ps54Zsii
1,339
macOS opens kernel tasks and doesn't unload the model
{ "login": "igorcosta", "id": 1169752, "node_id": "MDQ6VXNlcjExNjk3NTI=", "avatar_url": "https://avatars.githubusercontent.com/u/1169752?v=4", "gravatar_id": "", "url": "https://api.github.com/users/igorcosta", "html_url": "https://github.com/igorcosta", "followers_url": "https://api.github.com/users/igorcosta/followers", "following_url": "https://api.github.com/users/igorcosta/following{/other_user}", "gists_url": "https://api.github.com/users/igorcosta/gists{/gist_id}", "starred_url": "https://api.github.com/users/igorcosta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/igorcosta/subscriptions", "organizations_url": "https://api.github.com/users/igorcosta/orgs", "repos_url": "https://api.github.com/users/igorcosta/repos", "events_url": "https://api.github.com/users/igorcosta/events{/privacy}", "received_events_url": "https://api.github.com/users/igorcosta/received_events", "type": "User", "user_view_type": "public", "site_admin": true }
[]
closed
false
null
[]
null
11
2023-12-01T03:50:57
2024-08-06T07:35:43
2024-01-26T22:28:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
One of the things that makes me cringe is that when swapping between models, the memory is never released when I'm done using one. It just piles up and I eventually have to restart my Mac. Would memory optimisation be a target for the next release? (A sketch of asking the server to unload a model follows this record.)
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1339/timeline
null
completed
false
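Related to the memory complaint above: later Ollama versions accept a `keep_alive` parameter on generate/chat requests, and per the FAQ, an empty prompt with `keep_alive: 0` asks the server to unload the model immediately instead of holding it for the default window. A minimal sketch against a local server; whether this was available at the time of the issue depends on the version.

```python
import json
import urllib.request

def unload(model: str) -> None:
    # An empty prompt with keep_alive=0 asks the server to free the model
    # right away rather than keeping it resident.
    body = {"model": model, "prompt": "", "keep_alive": 0}
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).read()

if __name__ == "__main__":
    unload("llama2")
```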
https://api.github.com/repos/ollama/ollama/issues/3776
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3776/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3776/comments
https://api.github.com/repos/ollama/ollama/issues/3776/events
https://github.com/ollama/ollama/issues/3776
2,254,515,521
I_kwDOJ0Z1Ps6GYS1B
3,776
Manifest error, no such host found.
{ "login": "harshaelon", "id": 128384441, "node_id": "U_kgDOB6b9uQ", "avatar_url": "https://avatars.githubusercontent.com/u/128384441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harshaelon", "html_url": "https://github.com/harshaelon", "followers_url": "https://api.github.com/users/harshaelon/followers", "following_url": "https://api.github.com/users/harshaelon/following{/other_user}", "gists_url": "https://api.github.com/users/harshaelon/gists{/gist_id}", "starred_url": "https://api.github.com/users/harshaelon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harshaelon/subscriptions", "organizations_url": "https://api.github.com/users/harshaelon/orgs", "repos_url": "https://api.github.com/users/harshaelon/repos", "events_url": "https://api.github.com/users/harshaelon/events{/privacy}", "received_events_url": "https://api.github.com/users/harshaelon/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677370291, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw", "url": "https://api.github.com/repos/ollama/ollama/labels/networking", "name": "networking", "color": "0B5368", "default": false, "description": "Issues relating to ollama pull and push" } ]
closed
false
null
[]
null
1
2024-04-20T11:42:09
2024-05-02T00:22:32
2024-05-02T00:22:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I was running Ollama and trying to use it with llama2, but I was not able to run it or proceed further. Here is the screenshot; any help would be highly appreciated. ![image](https://github.com/ollama/ollama/assets/128384441/00133a19-dc50-4b8a-8e4a-5174a00a1a2c) ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version ollama version is 0.1.32
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3776/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3776/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/658
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/658/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/658/comments
https://api.github.com/repos/ollama/ollama/issues/658/events
https://github.com/ollama/ollama/pull/658
1,920,230,913
PR_kwDOJ0Z1Ps5blkmB
658
Add colab badge
{ "login": "bitsnaps", "id": 1217741, "node_id": "MDQ6VXNlcjEyMTc3NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1217741?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bitsnaps", "html_url": "https://github.com/bitsnaps", "followers_url": "https://api.github.com/users/bitsnaps/followers", "following_url": "https://api.github.com/users/bitsnaps/following{/other_user}", "gists_url": "https://api.github.com/users/bitsnaps/gists{/gist_id}", "starred_url": "https://api.github.com/users/bitsnaps/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bitsnaps/subscriptions", "organizations_url": "https://api.github.com/users/bitsnaps/orgs", "repos_url": "https://api.github.com/users/bitsnaps/repos", "events_url": "https://api.github.com/users/bitsnaps/events{/privacy}", "received_events_url": "https://api.github.com/users/bitsnaps/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-09-30T11:57:03
2023-10-06T09:31:23
2023-10-01T05:39:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/658", "html_url": "https://github.com/ollama/ollama/pull/658", "diff_url": "https://github.com/ollama/ollama/pull/658.diff", "patch_url": "https://github.com/ollama/ollama/pull/658.patch", "merged_at": null }
Update README to add a working Colab notebook, tested for free on a T4 with GPU support.
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/658/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/658/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4070
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4070/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4070/comments
https://api.github.com/repos/ollama/ollama/issues/4070/events
https://github.com/ollama/ollama/issues/4070
2,272,867,624
I_kwDOJ0Z1Ps6HeTUo
4,070
Ollama run model error
{ "login": "pandaymx", "id": 82139672, "node_id": "MDQ6VXNlcjgyMTM5Njcy", "avatar_url": "https://avatars.githubusercontent.com/u/82139672?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pandaymx", "html_url": "https://github.com/pandaymx", "followers_url": "https://api.github.com/users/pandaymx/followers", "following_url": "https://api.github.com/users/pandaymx/following{/other_user}", "gists_url": "https://api.github.com/users/pandaymx/gists{/gist_id}", "starred_url": "https://api.github.com/users/pandaymx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pandaymx/subscriptions", "organizations_url": "https://api.github.com/users/pandaymx/orgs", "repos_url": "https://api.github.com/users/pandaymx/repos", "events_url": "https://api.github.com/users/pandaymx/events{/privacy}", "received_events_url": "https://api.github.com/users/pandaymx/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA", "url": "https://api.github.com/repos/ollama/ollama/labels/amd", "name": "amd", "color": "000000", "default": false, "description": "Issues relating to AMD GPUs and ROCm" }, { "id": 6677745918, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g", "url": "https://api.github.com/repos/ollama/ollama/labels/gpu", "name": "gpu", "color": "76C49E", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
6
2024-05-01T03:33:49
2024-05-02T16:20:41
2024-05-02T16:20:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ## Question I changed the env variable because my GPU isn't supported. My GPU is an AMD RX 6750 GRE. <img width="465" alt="PixPin_2024-05-01_11-25-24" src="https://github.com/ollama/ollama/assets/82139672/19627989-f37b-44e7-aeb6-47c02db8b0f3"> <img width="1015" alt="PixPin_2024-05-01_11-26-58" src="https://github.com/ollama/ollama/assets/82139672/b8191847-1e43-4d15-b9f9-5fb4e3646883"> I also tried removing the env variable, but it had no effect. ### OS Windows ### GPU AMD ### CPU AMD ### Ollama version 0.1.32
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4070/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4070/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/119
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/119/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/119/comments
https://api.github.com/repos/ollama/ollama/issues/119/events
https://github.com/ollama/ollama/issues/119
1,811,268,907
I_kwDOJ0Z1Ps5r9cUr
119
Where is the model file stored?
{ "login": "happy15", "id": 983570, "node_id": "MDQ6VXNlcjk4MzU3MA==", "avatar_url": "https://avatars.githubusercontent.com/u/983570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/happy15", "html_url": "https://github.com/happy15", "followers_url": "https://api.github.com/users/happy15/followers", "following_url": "https://api.github.com/users/happy15/following{/other_user}", "gists_url": "https://api.github.com/users/happy15/gists{/gist_id}", "starred_url": "https://api.github.com/users/happy15/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/happy15/subscriptions", "organizations_url": "https://api.github.com/users/happy15/orgs", "repos_url": "https://api.github.com/users/happy15/repos", "events_url": "https://api.github.com/users/happy15/events{/privacy}", "received_events_url": "https://api.github.com/users/happy15/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
8
2023-07-19T06:43:23
2024-02-04T08:20:25
2023-07-19T06:45:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, first of all, thanks for the awesome work. Just wondering, where are the model files located? (The default paths are sketched after this record.)
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/119/timeline
null
completed
false
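Per Ollama's FAQ, models live under `~/.ollama/models` on macOS, `/usr/share/ollama/.ollama/models` on Linux installs that run the packaged system service, and `C:\Users\<user>\.ollama\models` on Windows, unless `OLLAMA_MODELS` overrides the location. A small sketch that resolves the effective directory under those documented defaults:

```python
import os
import platform
from pathlib import Path

def models_dir() -> Path:
    # An explicit override always wins.
    override = os.environ.get("OLLAMA_MODELS")
    if override:
        return Path(override)
    if platform.system() == "Linux":
        # Default for the packaged systemd service; per-user installs
        # use ~/.ollama/models instead.
        return Path("/usr/share/ollama/.ollama/models")
    # macOS and Windows both default to the user's home directory.
    return Path.home() / ".ollama" / "models"

if __name__ == "__main__":
    print(models_dir())
```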
https://api.github.com/repos/ollama/ollama/issues/988
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/988/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/988/comments
https://api.github.com/repos/ollama/ollama/issues/988/events
https://github.com/ollama/ollama/pull/988
1,976,472,811
PR_kwDOJ0Z1Ps5ejUMQ
988
Add `encode` and `decode` API endpoints
{ "login": "samdevbr", "id": 34373264, "node_id": "MDQ6VXNlcjM0MzczMjY0", "avatar_url": "https://avatars.githubusercontent.com/u/34373264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samdevbr", "html_url": "https://github.com/samdevbr", "followers_url": "https://api.github.com/users/samdevbr/followers", "following_url": "https://api.github.com/users/samdevbr/following{/other_user}", "gists_url": "https://api.github.com/users/samdevbr/gists{/gist_id}", "starred_url": "https://api.github.com/users/samdevbr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samdevbr/subscriptions", "organizations_url": "https://api.github.com/users/samdevbr/orgs", "repos_url": "https://api.github.com/users/samdevbr/repos", "events_url": "https://api.github.com/users/samdevbr/events{/privacy}", "received_events_url": "https://api.github.com/users/samdevbr/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
5
2023-11-03T15:42:18
2023-11-16T16:03:34
2023-11-14T12:52:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/988", "html_url": "https://github.com/ollama/ollama/pull/988", "diff_url": "https://github.com/ollama/ollama/pull/988.diff", "patch_url": "https://github.com/ollama/ollama/pull/988.patch", "merged_at": null }
While working on a POC project for the company I work at, I've come across the need for encoding and decoding prompts. We are building a long-term memory POC, and that requires token management; as of now we cannot predict how long the token list of a prompt might be. This PR creates the following endpoints: - `/api/encode` - `/api/decode` Both endpoints, together with the existing ones, will give us the flexibility to smartly manage prompt tokens. (A hypothetical client-side sketch of these endpoints follows this record.)
{ "login": "samdevbr", "id": 34373264, "node_id": "MDQ6VXNlcjM0MzczMjY0", "avatar_url": "https://avatars.githubusercontent.com/u/34373264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samdevbr", "html_url": "https://github.com/samdevbr", "followers_url": "https://api.github.com/users/samdevbr/followers", "following_url": "https://api.github.com/users/samdevbr/following{/other_user}", "gists_url": "https://api.github.com/users/samdevbr/gists{/gist_id}", "starred_url": "https://api.github.com/users/samdevbr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samdevbr/subscriptions", "organizations_url": "https://api.github.com/users/samdevbr/orgs", "repos_url": "https://api.github.com/users/samdevbr/repos", "events_url": "https://api.github.com/users/samdevbr/events{/privacy}", "received_events_url": "https://api.github.com/users/samdevbr/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/988/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/988/timeline
null
null
true
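Note that the PR above was closed without merging, so `/api/encode` and `/api/decode` are not part of Ollama's released API. Purely as an illustration of what the proposed endpoints might have looked like to a client; the field names and response shapes below are assumptions, not the PR's actual implementation.

```python
import json
import urllib.request

BASE = "http://localhost:11434"

def _post(path: str, body: dict) -> dict:
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Hypothetical client calls: request and response fields are assumed,
# since the endpoints were never merged.
def encode(model: str, prompt: str) -> list[int]:
    return _post("/api/encode", {"model": model, "prompt": prompt})["tokens"]

def decode(model: str, tokens: list[int]) -> str:
    return _post("/api/decode", {"model": model, "tokens": tokens})["prompt"]
```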
https://api.github.com/repos/ollama/ollama/issues/3730
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3730/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3730/comments
https://api.github.com/repos/ollama/ollama/issues/3730/events
https://github.com/ollama/ollama/issues/3730
2,250,331,970
I_kwDOJ0Z1Ps6GIVdC
3,730
Startup error after upgrading to the latest version - windows subprocess crash on 0.1.32
{ "login": "hyanqing1", "id": 26663452, "node_id": "MDQ6VXNlcjI2NjYzNDUy", "avatar_url": "https://avatars.githubusercontent.com/u/26663452?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hyanqing1", "html_url": "https://github.com/hyanqing1", "followers_url": "https://api.github.com/users/hyanqing1/followers", "following_url": "https://api.github.com/users/hyanqing1/following{/other_user}", "gists_url": "https://api.github.com/users/hyanqing1/gists{/gist_id}", "starred_url": "https://api.github.com/users/hyanqing1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hyanqing1/subscriptions", "organizations_url": "https://api.github.com/users/hyanqing1/orgs", "repos_url": "https://api.github.com/users/hyanqing1/repos", "events_url": "https://api.github.com/users/hyanqing1/events{/privacy}", "received_events_url": "https://api.github.com/users/hyanqing1/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
7
2024-04-18T10:30:25
2024-05-21T18:22:11
2024-05-21T18:22:11
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? After upgrading to the latest version 0.1.32, startup fails with the following error: Error: llama runner process no longer running: 3221225785 After reinstalling version 0.1.31, it starts normally. My system is Windows 10. ### OS Windows ### GPU Intel ### CPU Intel ### Ollama version 0.1.32
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3730/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3730/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3238
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3238/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3238/comments
https://api.github.com/repos/ollama/ollama/issues/3238/events
https://github.com/ollama/ollama/issues/3238
2,194,261,265
I_kwDOJ0Z1Ps6CycUR
3,238
Add a google colab notebook link to the github for new users.
{ "login": "jquintanilla4", "id": 32947277, "node_id": "MDQ6VXNlcjMyOTQ3Mjc3", "avatar_url": "https://avatars.githubusercontent.com/u/32947277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jquintanilla4", "html_url": "https://github.com/jquintanilla4", "followers_url": "https://api.github.com/users/jquintanilla4/followers", "following_url": "https://api.github.com/users/jquintanilla4/following{/other_user}", "gists_url": "https://api.github.com/users/jquintanilla4/gists{/gist_id}", "starred_url": "https://api.github.com/users/jquintanilla4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jquintanilla4/subscriptions", "organizations_url": "https://api.github.com/users/jquintanilla4/orgs", "repos_url": "https://api.github.com/users/jquintanilla4/repos", "events_url": "https://api.github.com/users/jquintanilla4/events{/privacy}", "received_events_url": "https://api.github.com/users/jquintanilla4/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
4
2024-03-19T07:47:01
2024-04-23T05:18:44
2024-03-19T09:11:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What are you trying to do? Every once in a while people ask how to get Ollama running on Google Colab, either for doing dev work inside Colab or as a remote GPU. I think if the GitHub repo had a one-click button to a notebook, it would settle this evergreen question in the community. ### How should we solve this? I created a notebook and tested it to solve this issue. https://colab.research.google.com/drive/1-K00WnTdDJC2JCuFNLKKa-ScmcDWi1Nw?usp=sharing ### What is the impact of not solving this? Always having to paste the link on Discord. ### Anything else? Happy to make edits.
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3238/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/2674
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2674/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2674/comments
https://api.github.com/repos/ollama/ollama/issues/2674/events
https://github.com/ollama/ollama/pull/2674
2,148,836,170
PR_kwDOJ0Z1Ps5nooA5
2,674
Update Readme.md : Add Gemma to the table of supported example models
{ "login": "sethupavan12", "id": 60856766, "node_id": "MDQ6VXNlcjYwODU2NzY2", "avatar_url": "https://avatars.githubusercontent.com/u/60856766?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sethupavan12", "html_url": "https://github.com/sethupavan12", "followers_url": "https://api.github.com/users/sethupavan12/followers", "following_url": "https://api.github.com/users/sethupavan12/following{/other_user}", "gists_url": "https://api.github.com/users/sethupavan12/gists{/gist_id}", "starred_url": "https://api.github.com/users/sethupavan12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sethupavan12/subscriptions", "organizations_url": "https://api.github.com/users/sethupavan12/orgs", "repos_url": "https://api.github.com/users/sethupavan12/repos", "events_url": "https://api.github.com/users/sethupavan12/events{/privacy}", "received_events_url": "https://api.github.com/users/sethupavan12/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-02-22T11:20:16
2024-02-22T18:08:17
2024-02-22T18:08:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2674", "html_url": "https://github.com/ollama/ollama/pull/2674", "diff_url": "https://github.com/ollama/ollama/pull/2674.diff", "patch_url": "https://github.com/ollama/ollama/pull/2674.patch", "merged_at": null }
Minor: adds Google Gemma to the list of supported example models.
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2674/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1901
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1901/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1901/comments
https://api.github.com/repos/ollama/ollama/issues/1901/events
https://github.com/ollama/ollama/issues/1901
2,074,610,060
I_kwDOJ0Z1Ps57qAmM
1,901
"api/generate" stalls after some queries
{ "login": "oderwat", "id": 719156, "node_id": "MDQ6VXNlcjcxOTE1Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/719156?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oderwat", "html_url": "https://github.com/oderwat", "followers_url": "https://api.github.com/users/oderwat/followers", "following_url": "https://api.github.com/users/oderwat/following{/other_user}", "gists_url": "https://api.github.com/users/oderwat/gists{/gist_id}", "starred_url": "https://api.github.com/users/oderwat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oderwat/subscriptions", "organizations_url": "https://api.github.com/users/oderwat/orgs", "repos_url": "https://api.github.com/users/oderwat/repos", "events_url": "https://api.github.com/users/oderwat/events{/privacy}", "received_events_url": "https://api.github.com/users/oderwat/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5808482718, "node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng", "url": "https://api.github.com/repos/ollama/ollama/labels/performance", "name": "performance", "color": "A5B5C6", "default": false, "description": "" } ]
closed
false
null
[]
null
8
2024-01-10T15:24:39
2024-03-14T12:58:16
2024-03-13T23:44:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have a strange phenomenon and can't get rid of it without a workaround: when I call "api/generate" with the same model regularly every few seconds (5s-15s), the API suddenly stops responding after 15-20 calls (which seems to depend on the model size?). This is reproducible with different models and with both a WSL2-based server and my iMac-based server (I could try it with an M1 Air too, but haven't so far). When I run it on the iMac, CPU consumption stays high while the API does not return from the call. See this CPU display (it shows some of the last working queries until it freezes and stops replying): ![Snipaste_2024-01-10_13-51-59](https://github.com/jmorganca/ollama/assets/719156/f43bdac7-b162-446b-bbb1-77a757c2ec5a) When I switch models for the generation, or just create an embedding (using the endpoint) with a tiny model and an empty prompt in between, it works endlessly with the same prompts and code. I am using current main and also tried going back some commits, but it seems this also happens with older ones. Is there anything I can do to get more information to find out what the problem may be? Specialities: I use `OLLAMA_HOST=0.0.0.0:11434 OLLAMA_ORIGINS="*"` on the server and call the API from JavaScript (actually WASM) using the fetch API. I did not try it with another type of HTTP client yet (and can't for this special application's use case). (A sketch of the embedding-call workaround follows this record.)
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1901/timeline
null
completed
false
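A minimal sketch of the workaround described in the issue above: interleave a cheap `/api/embeddings` call between the periodic `/api/generate` calls so the stall never builds up. The model name, prompt, and 10-second cadence are illustrative, not taken from the reporter's setup.

```python
import json
import time
import urllib.request

BASE = "http://localhost:11434"

def post(path: str, body: dict) -> dict:
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

for i in range(30):
    out = post("/api/generate",
               {"model": "llama2", "prompt": "Say hi.", "stream": False})
    print(i, out["response"][:40])
    # Workaround from the issue: a tiny embedding call in between keeps
    # the next generate call from hanging.
    post("/api/embeddings", {"model": "llama2", "prompt": ""})
    time.sleep(10)
```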
https://api.github.com/repos/ollama/ollama/issues/1398
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1398/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1398/comments
https://api.github.com/repos/ollama/ollama/issues/1398/events
https://github.com/ollama/ollama/issues/1398
2,028,690,428
I_kwDOJ0Z1Ps5461v8
1,398
Bug: API - Chat docs examples are using `api/generate` in URL instead of `api/chat`
{ "login": "calderonsamuel", "id": 19418298, "node_id": "MDQ6VXNlcjE5NDE4Mjk4", "avatar_url": "https://avatars.githubusercontent.com/u/19418298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/calderonsamuel", "html_url": "https://github.com/calderonsamuel", "followers_url": "https://api.github.com/users/calderonsamuel/followers", "following_url": "https://api.github.com/users/calderonsamuel/following{/other_user}", "gists_url": "https://api.github.com/users/calderonsamuel/gists{/gist_id}", "starred_url": "https://api.github.com/users/calderonsamuel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/calderonsamuel/subscriptions", "organizations_url": "https://api.github.com/users/calderonsamuel/orgs", "repos_url": "https://api.github.com/users/calderonsamuel/repos", "events_url": "https://api.github.com/users/calderonsamuel/events{/privacy}", "received_events_url": "https://api.github.com/users/calderonsamuel/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-12-06T14:31:30
2023-12-06T22:22:17
2023-12-06T20:10:34
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://github.com/jmorganca/ollama/blob/32f62fbb8e0b1ecb4ec8369586562abce86c8e50/docs/api.md?plain=1#L317-L327 https://github.com/jmorganca/ollama/blob/32f62fbb8e0b1ecb4ec8369586562abce86c8e50/docs/api.md?plain=1#L366-L384
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1398/reactions", "total_count": 1, "+1": 0, "-1": 1, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1398/timeline
null
completed
false
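For reference, the corrected endpoint from the docs fix above: `/api/chat` takes a `messages` array rather than a single `prompt`. A minimal non-streaming call against a local server, with the model name as an example:

```python
import json
import urllib.request

body = {
    "model": "llama2",
    "messages": [
        {"role": "user", "content": "Why is the sky blue?"},
    ],
    "stream": False,  # return one JSON object instead of a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["message"]["content"])
```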
https://api.github.com/repos/ollama/ollama/issues/3865
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3865/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3865/comments
https://api.github.com/repos/ollama/ollama/issues/3865/events
https://github.com/ollama/ollama/pull/3865
2,260,256,162
PR_kwDOJ0Z1Ps5tjCrA
3,865
add OLLAMA_KEEP_ALIVE env variable to FAQ
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-04-24T03:58:24
2024-04-24T04:06:52
2024-04-24T04:06:51
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3865", "html_url": "https://github.com/ollama/ollama/pull/3865", "diff_url": "https://github.com/ollama/ollama/pull/3865.diff", "patch_url": "https://github.com/ollama/ollama/pull/3865.patch", "merged_at": "2024-04-24T04:06:51" }
null
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3865/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5322
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5322/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5322/comments
https://api.github.com/repos/ollama/ollama/issues/5322/events
https://github.com/ollama/ollama/issues/5322
2,377,849,904
I_kwDOJ0Z1Ps6Nuxww
5,322
Latest 0.1.47 pre-release seems to break every model
{ "login": "AncientMystic", "id": 62780271, "node_id": "MDQ6VXNlcjYyNzgwMjcx", "avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AncientMystic", "html_url": "https://github.com/AncientMystic", "followers_url": "https://api.github.com/users/AncientMystic/followers", "following_url": "https://api.github.com/users/AncientMystic/following{/other_user}", "gists_url": "https://api.github.com/users/AncientMystic/gists{/gist_id}", "starred_url": "https://api.github.com/users/AncientMystic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AncientMystic/subscriptions", "organizations_url": "https://api.github.com/users/AncientMystic/orgs", "repos_url": "https://api.github.com/users/AncientMystic/repos", "events_url": "https://api.github.com/users/AncientMystic/events{/privacy}", "received_events_url": "https://api.github.com/users/AncientMystic/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-06-27T11:04:23
2024-06-27T11:48:40
2024-06-27T11:48:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Tried the pre-release, and every single model I tried it with either outputs random code bits with a response or just outputs random code bits with no response. It seems to break literally every single model I have. Edit: never mind; not sure what happened exactly, but I reinstalled the same version after downgrading and upgrading again, and everything is working normally again.
{ "login": "AncientMystic", "id": 62780271, "node_id": "MDQ6VXNlcjYyNzgwMjcx", "avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AncientMystic", "html_url": "https://github.com/AncientMystic", "followers_url": "https://api.github.com/users/AncientMystic/followers", "following_url": "https://api.github.com/users/AncientMystic/following{/other_user}", "gists_url": "https://api.github.com/users/AncientMystic/gists{/gist_id}", "starred_url": "https://api.github.com/users/AncientMystic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AncientMystic/subscriptions", "organizations_url": "https://api.github.com/users/AncientMystic/orgs", "repos_url": "https://api.github.com/users/AncientMystic/repos", "events_url": "https://api.github.com/users/AncientMystic/events{/privacy}", "received_events_url": "https://api.github.com/users/AncientMystic/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5322/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4944
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4944/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4944/comments
https://api.github.com/repos/ollama/ollama/issues/4944/events
https://github.com/ollama/ollama/issues/4944
2,342,055,691
I_kwDOJ0Z1Ps6LmO8L
4,944
Ollama reports incorrect version and does not show up in System tray
{ "login": "VirtualZardoz", "id": 167669409, "node_id": "U_kgDOCf5uoQ", "avatar_url": "https://avatars.githubusercontent.com/u/167669409?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VirtualZardoz", "html_url": "https://github.com/VirtualZardoz", "followers_url": "https://api.github.com/users/VirtualZardoz/followers", "following_url": "https://api.github.com/users/VirtualZardoz/following{/other_user}", "gists_url": "https://api.github.com/users/VirtualZardoz/gists{/gist_id}", "starred_url": "https://api.github.com/users/VirtualZardoz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VirtualZardoz/subscriptions", "organizations_url": "https://api.github.com/users/VirtualZardoz/orgs", "repos_url": "https://api.github.com/users/VirtualZardoz/repos", "events_url": "https://api.github.com/users/VirtualZardoz/events{/privacy}", "received_events_url": "https://api.github.com/users/VirtualZardoz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
14
2024-06-09T06:41:39
2024-06-18T16:09:44
2024-06-18T16:09:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Ever since I upgraded Ollama to version 0.1.38, it keeps reporting this as its version number: ``` [...]> ollama -v ollama version is 0.1.38 ``` despite the fact that I have updated it to every version released since 0.1.38; my current version should be reported as 0.1.42. Also, Ollama has stopped showing up in the system tray, which seems to limit some of the controls I have over the application. I have done the usual before posting here: - Searched the Discord - Searched this GitHub - Uninstalled Ollama - Reinstalled Ollama Ollama was installed with the package downloaded from the website. I run Windows 11 with an Nvidia GeForce RTX 3090. ### OS Windows ### GPU Nvidia ### CPU AMD ### Ollama version 0.1.42
{ "login": "VirtualZardoz", "id": 167669409, "node_id": "U_kgDOCf5uoQ", "avatar_url": "https://avatars.githubusercontent.com/u/167669409?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VirtualZardoz", "html_url": "https://github.com/VirtualZardoz", "followers_url": "https://api.github.com/users/VirtualZardoz/followers", "following_url": "https://api.github.com/users/VirtualZardoz/following{/other_user}", "gists_url": "https://api.github.com/users/VirtualZardoz/gists{/gist_id}", "starred_url": "https://api.github.com/users/VirtualZardoz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VirtualZardoz/subscriptions", "organizations_url": "https://api.github.com/users/VirtualZardoz/orgs", "repos_url": "https://api.github.com/users/VirtualZardoz/repos", "events_url": "https://api.github.com/users/VirtualZardoz/events{/privacy}", "received_events_url": "https://api.github.com/users/VirtualZardoz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4944/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5277
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5277/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5277/comments
https://api.github.com/repos/ollama/ollama/issues/5277/events
https://github.com/ollama/ollama/issues/5277
2,373,096,132
I_kwDOJ0Z1Ps6NcpLE
5,277
"How to utilize the Ollama local model in Windows 10 to generate the same API link as OpenAI, enabling other programs to replace the GPT-4 link? Currently, entering 'ollama serve' in CMD generates the 'http://localhost:11434' link, but replacing this link with the GPT-4 link in applications does not work. Please provide a command to generate a link that supports replacing GPT-4."
{ "login": "windkwbs", "id": 129468439, "node_id": "U_kgDOB7eIFw", "avatar_url": "https://avatars.githubusercontent.com/u/129468439?v=4", "gravatar_id": "", "url": "https://api.github.com/users/windkwbs", "html_url": "https://github.com/windkwbs", "followers_url": "https://api.github.com/users/windkwbs/followers", "following_url": "https://api.github.com/users/windkwbs/following{/other_user}", "gists_url": "https://api.github.com/users/windkwbs/gists{/gist_id}", "starred_url": "https://api.github.com/users/windkwbs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/windkwbs/subscriptions", "organizations_url": "https://api.github.com/users/windkwbs/orgs", "repos_url": "https://api.github.com/users/windkwbs/repos", "events_url": "https://api.github.com/users/windkwbs/events{/privacy}", "received_events_url": "https://api.github.com/users/windkwbs/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-06-25T16:16:53
2024-07-24T19:03:27
2024-07-24T19:03:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
"How to utilize the Ollama local model in Windows 10 to generate the same API link as OpenAI, enabling other programs to replace the GPT-4 link? Currently, entering 'ollama serve' in CMD generates the 'http://localhost:11434/' link, but replacing this link with the GPT-4 link in applications does not work. Please provide a command to generate a link that supports replacing GPT-4.""How to utilize the Ollama local model in Windows 10 to generate the same API link as OpenAI, enabling other programs to replace the GPT-4 link? Currently, entering 'ollama serve' in CMD generates the 'http://localhost:11434/' link, but replacing this link with the GPT-4 link in applications does not work. Please provide a command to generate a link that supports replacing GPT-4."
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5277/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3899
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3899/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3899/comments
https://api.github.com/repos/ollama/ollama/issues/3899/events
https://github.com/ollama/ollama/pull/3899
2,262,481,897
PR_kwDOJ0Z1Ps5tqp7r
3,899
show ggml modelinfo through the show api
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-04-25T01:54:59
2024-07-12T03:36:34
2024-07-12T03:36:34
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3899", "html_url": "https://github.com/ollama/ollama/pull/3899", "diff_url": "https://github.com/ollama/ollama/pull/3899.diff", "patch_url": "https://github.com/ollama/ollama/pull/3899.patch", "merged_at": null }
This change exposes the GGML KVs and tensor data to make it easier to introspect a model.
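A hedged sketch of how the exposed metadata could be read over the existing /api/show endpoint. Note this PR was closed without merging (merged_at is null), so the `model_info` field name below follows the shape the feature eventually shipped with and should be treated as an assumption; `llama3` is a placeholder model name.

```python
# Sketch: introspect a model's GGML key/values via POST /api/show.
# The "model_info" field name is an assumption based on the shipped API.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/show",
    data=json.dumps({"name": "llama3"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    info = json.load(resp)

# Print each GGML key/value pair; fall back gracefully if the field is absent.
for key, value in info.get("model_info", {}).items():
    print(f"{key} = {value}")
```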
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3899/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3899/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6194
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6194/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6194/comments
https://api.github.com/repos/ollama/ollama/issues/6194/events
https://github.com/ollama/ollama/issues/6194
2,450,037,869
I_kwDOJ0Z1Ps6SCJxt
6,194
Please add CodeShell to Ollama/library, as llama.cpp already supports it
{ "login": "vimBashMing", "id": 148437161, "node_id": "U_kgDOCNj4qQ", "avatar_url": "https://avatars.githubusercontent.com/u/148437161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vimBashMing", "html_url": "https://github.com/vimBashMing", "followers_url": "https://api.github.com/users/vimBashMing/followers", "following_url": "https://api.github.com/users/vimBashMing/following{/other_user}", "gists_url": "https://api.github.com/users/vimBashMing/gists{/gist_id}", "starred_url": "https://api.github.com/users/vimBashMing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vimBashMing/subscriptions", "organizations_url": "https://api.github.com/users/vimBashMing/orgs", "repos_url": "https://api.github.com/users/vimBashMing/repos", "events_url": "https://api.github.com/users/vimBashMing/events{/privacy}", "received_events_url": "https://api.github.com/users/vimBashMing/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
1
2024-08-06T06:17:52
2024-08-17T02:50:12
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, The codeshell model: https://huggingface.co/WisdomShell/CodeShell-7B-Chat-int4 Since CodeShell is already supported by llama.cpp, please help add CodeShell to ollama/library. Thanks! <img width="896" alt="image" src="https://github.com/user-attachments/assets/0b125b59-17f0-44a4-83e7-98a13f849543">
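As a stopgap until the model lands in ollama/library, a GGUF produced by llama.cpp can usually be imported locally with a Modelfile and `ollama create`. The sketch below drives that from Python; the GGUF file name and the `codeshell` tag are placeholder assumptions.

```python
# Sketch: import a llama.cpp-converted GGUF into a local Ollama install.
# The GGUF path and the "codeshell" tag are placeholders for illustration.
import pathlib
import subprocess

pathlib.Path("Modelfile").write_text("FROM ./codeshell-7b-chat.Q4_0.gguf\n")

# Equivalent to running: ollama create codeshell -f Modelfile
subprocess.run(["ollama", "create", "codeshell", "-f", "Modelfile"], check=True)
subprocess.run(["ollama", "run", "codeshell", "Write hello world in C"], check=True)
```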
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6194/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4179
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4179/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4179/comments
https://api.github.com/repos/ollama/ollama/issues/4179/events
https://github.com/ollama/ollama/issues/4179
2,279,707,154
I_kwDOJ0Z1Ps6H4ZIS
4,179
pull qwen:32b-chat-v1.5-q4_0 Error: unexpected end of JSON input
{ "login": "MarkWard0110", "id": 90335263, "node_id": "MDQ6VXNlcjkwMzM1MjYz", "avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MarkWard0110", "html_url": "https://github.com/MarkWard0110", "followers_url": "https://api.github.com/users/MarkWard0110/followers", "following_url": "https://api.github.com/users/MarkWard0110/following{/other_user}", "gists_url": "https://api.github.com/users/MarkWard0110/gists{/gist_id}", "starred_url": "https://api.github.com/users/MarkWard0110/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MarkWard0110/subscriptions", "organizations_url": "https://api.github.com/users/MarkWard0110/orgs", "repos_url": "https://api.github.com/users/MarkWard0110/repos", "events_url": "https://api.github.com/users/MarkWard0110/events{/privacy}", "received_events_url": "https://api.github.com/users/MarkWard0110/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-05-05T20:06:55
2024-05-06T18:33:54
2024-05-06T18:33:54
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? `ollama pull qwen:32b-chat-v1.5-q4_0` results in `Error: unexpected end of JSON input`. However, `ollama pull qwen:32b` works (right now they point to the same hash). ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.33
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4179/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4179/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4387
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4387/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4387/comments
https://api.github.com/repos/ollama/ollama/issues/4387/events
https://github.com/ollama/ollama/pull/4387
2,291,562,241
PR_kwDOJ0Z1Ps5vMQ3Z
4,387
Correct typos.
{ "login": "fangtaosong", "id": 59201842, "node_id": "MDQ6VXNlcjU5MjAxODQy", "avatar_url": "https://avatars.githubusercontent.com/u/59201842?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fangtaosong", "html_url": "https://github.com/fangtaosong", "followers_url": "https://api.github.com/users/fangtaosong/followers", "following_url": "https://api.github.com/users/fangtaosong/following{/other_user}", "gists_url": "https://api.github.com/users/fangtaosong/gists{/gist_id}", "starred_url": "https://api.github.com/users/fangtaosong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fangtaosong/subscriptions", "organizations_url": "https://api.github.com/users/fangtaosong/orgs", "repos_url": "https://api.github.com/users/fangtaosong/repos", "events_url": "https://api.github.com/users/fangtaosong/events{/privacy}", "received_events_url": "https://api.github.com/users/fangtaosong/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-05-13T00:04:01
2024-05-13T01:21:11
2024-05-13T01:21:11
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4387", "html_url": "https://github.com/ollama/ollama/pull/4387", "diff_url": "https://github.com/ollama/ollama/pull/4387.diff", "patch_url": "https://github.com/ollama/ollama/pull/4387.patch", "merged_at": "2024-05-13T01:21:11" }
ASSSISTANT --> ASSISTANT
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4387/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4387/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/71
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/71/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/71/comments
https://api.github.com/repos/ollama/ollama/issues/71/events
https://github.com/ollama/ollama/pull/71
1,799,943,250
PR_kwDOJ0Z1Ps5VQYsF
71
error checking new model
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-07-12T00:09:31
2023-07-12T16:20:40
2023-07-12T16:20:33
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/71", "html_url": "https://github.com/ollama/ollama/pull/71", "diff_url": "https://github.com/ollama/ollama/pull/71.diff", "patch_url": "https://github.com/ollama/ollama/pull/71.patch", "merged_at": "2023-07-12T16:20:33" }
check nil to prevent later nil pointer dereferences
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/71/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/71/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4803
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4803/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4803/comments
https://api.github.com/repos/ollama/ollama/issues/4803/events
https://github.com/ollama/ollama/issues/4803
2,332,454,660
I_kwDOJ0Z1Ps6LBm8E
4,803
Chat API with Llama3 8B model converted by llama.cpp has infinite response time
{ "login": "cuongnguyengit", "id": 45245565, "node_id": "MDQ6VXNlcjQ1MjQ1NTY1", "avatar_url": "https://avatars.githubusercontent.com/u/45245565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cuongnguyengit", "html_url": "https://github.com/cuongnguyengit", "followers_url": "https://api.github.com/users/cuongnguyengit/followers", "following_url": "https://api.github.com/users/cuongnguyengit/following{/other_user}", "gists_url": "https://api.github.com/users/cuongnguyengit/gists{/gist_id}", "starred_url": "https://api.github.com/users/cuongnguyengit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cuongnguyengit/subscriptions", "organizations_url": "https://api.github.com/users/cuongnguyengit/orgs", "repos_url": "https://api.github.com/users/cuongnguyengit/repos", "events_url": "https://api.github.com/users/cuongnguyengit/events{/privacy}", "received_events_url": "https://api.github.com/users/cuongnguyengit/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-06-04T03:26:14
2024-06-05T20:45:36
2024-06-05T20:45:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi team, I used your guide (https://github.com/ollama/ollama/blob/main/docs/import.md) to convert https://huggingface.co/hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode to a GGUF file. All of the conversions were OK, but when I run it with Ollama I get the following error: llama_new_context_with_model: graph splits = 5 {"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"139847421562880","timestamp":1717470350} {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"139847421562880","timestamp":1717470350} {"function":"main","level":"INFO","line":3064,"msg":"model loaded","tid":"139847421562880","timestamp":1717470350} {"function":"validate_model_chat_template","level":"ERR","line":437,"msg":"The chat template comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses","tid":"139847421562880","timestamp":1717470350} {"function":"main","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"71","port":"24108","tid":"139847421562880","timestamp":1717470350} {"function":"update_slots","level":"INFO","line":1578,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"139847421562880","timestamp":1717470350} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"139847421562880","timestamp":1717470350} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"139847421562880","timestamp":1717470350} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6688,"status":200,"tid":"139843732500480","timestamp":1717470350} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":2,"tid":"139847421562880","timestamp":1717470350} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6690,"status":200,"tid":"139843724107776","timestamp":1717470350} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":3,"tid":"139847421562880","timestamp":1717470350} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6692,"status":200,"tid":"139843707322368","timestamp":1717470350} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":4,"tid":"139847421562880","timestamp":1717470350} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6694,"status":200,"tid":"139843715715072","timestamp":1717470350} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":5,"tid":"139847421562880","timestamp":1717470350} 
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6696,"status":200,"tid":"139843698929664","timestamp":1717470350} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6698,"status":200,"tid":"139843690536960","timestamp":1717470350} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":6,"tid":"139847421562880","timestamp":1717470350} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6810,"status":200,"tid":"139843598282752","timestamp":1717470350} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":7,"tid":"139847421562880","timestamp":1717470350} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6810,"status":200,"tid":"139843598282752","timestamp":1717470350} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":8,"tid":"139847421562880","timestamp":1717470350} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6802,"status":200,"tid":"139843589890048","timestamp":1717470350} {"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/tokenize","remote_addr":"127.0.0.1","remote_port":6810,"status":200,"tid":"139843598282752","timestamp":1717470350} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":9,"tid":"139847421562880","timestamp":1717470350} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6776,"status":200,"tid":"139843581497344","timestamp":1717470350} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":10,"tid":"139847421562880","timestamp":1717470350} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6810,"status":200,"tid":"139843598282752","timestamp":1717470350} {"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":11,"tid":"139847421562880","timestamp":1717470350} {"function":"update_slots","ga_i":0,"level":"INFO","line":1809,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":188,"slot_id":0,"task_id":11,"tid":"139847421562880","timestamp":1717470350} {"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":11,"tid":"139847421562880","timestamp":1717470350} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":13,"tid":"139847421562880","timestamp":1717470350} 
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6804,"status":200,"tid":"139843573104640","timestamp":1717470350} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":34,"tid":"139847421562880","timestamp":1717470351} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6752,"status":200,"tid":"139843564711936","timestamp":1717470351} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":44,"tid":"139847421562880","timestamp":1717470351} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6788,"status":200,"tid":"139843556319232","timestamp":1717470351} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":47,"tid":"139847421562880","timestamp":1717470351} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6756,"status":200,"tid":"139843547926528","timestamp":1717470351} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":52,"tid":"139847421562880","timestamp":1717470351} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6790,"status":200,"tid":"139843539533824","timestamp":1717470351} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":72,"tid":"139847421562880","timestamp":1717470352} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6764,"status":200,"tid":"139843531141120","timestamp":1717470352} {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":97,"tid":"139847421562880","timestamp":1717470353} {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":6772,"status":200,"tid":"139843682144256","timestamp":1717470353} ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.32
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4803/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4803/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1262
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1262/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1262/comments
https://api.github.com/repos/ollama/ollama/issues/1262/events
https://github.com/ollama/ollama/pull/1262
2,009,175,048
PR_kwDOJ0Z1Ps5gR4sX
1,262
windows CUDA support
{ "login": "vinjn", "id": 558657, "node_id": "MDQ6VXNlcjU1ODY1Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/558657?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vinjn", "html_url": "https://github.com/vinjn", "followers_url": "https://api.github.com/users/vinjn/followers", "following_url": "https://api.github.com/users/vinjn/following{/other_user}", "gists_url": "https://api.github.com/users/vinjn/gists{/gist_id}", "starred_url": "https://api.github.com/users/vinjn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vinjn/subscriptions", "organizations_url": "https://api.github.com/users/vinjn/orgs", "repos_url": "https://api.github.com/users/vinjn/repos", "events_url": "https://api.github.com/users/vinjn/events{/privacy}", "received_events_url": "https://api.github.com/users/vinjn/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-11-24T06:26:31
2023-12-12T19:00:27
2023-11-24T22:16:36
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1262", "html_url": "https://github.com/ollama/ollama/pull/1262", "diff_url": "https://github.com/ollama/ollama/pull/1262.diff", "patch_url": "https://github.com/ollama/ollama/pull/1262.patch", "merged_at": "2023-11-24T22:16:36" }
Fix #403 - Support CUDA build in Windows - Import the "containerd/console" lib to support colorful output in the Windows terminal ![image](https://github.com/jmorganca/ollama/assets/558657/0018234e-e61c-4c55-a627-dd667ffbbbdf)
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1262/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3709
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3709/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3709/comments
https://api.github.com/repos/ollama/ollama/issues/3709/events
https://github.com/ollama/ollama/pull/3709
2,249,167,308
PR_kwDOJ0Z1Ps5s-GEt
3,709
Adds support for customizing GPU build flags in llama.cpp
{ "login": "remy415", "id": 105550370, "node_id": "U_kgDOBkqSIg", "avatar_url": "https://avatars.githubusercontent.com/u/105550370?v=4", "gravatar_id": "", "url": "https://api.github.com/users/remy415", "html_url": "https://github.com/remy415", "followers_url": "https://api.github.com/users/remy415/followers", "following_url": "https://api.github.com/users/remy415/following{/other_user}", "gists_url": "https://api.github.com/users/remy415/gists{/gist_id}", "starred_url": "https://api.github.com/users/remy415/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/remy415/subscriptions", "organizations_url": "https://api.github.com/users/remy415/orgs", "repos_url": "https://api.github.com/users/remy415/repos", "events_url": "https://api.github.com/users/remy415/events{/privacy}", "received_events_url": "https://api.github.com/users/remy415/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
3
2024-04-17T20:03:02
2024-04-23T16:30:31
2024-04-23T16:28:34
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3709", "html_url": "https://github.com/ollama/ollama/pull/3709", "diff_url": "https://github.com/ollama/ollama/pull/3709.diff", "patch_url": "https://github.com/ollama/ollama/pull/3709.patch", "merged_at": "2024-04-23T16:28:34" }
Appends OLLAMA_CUSTOM_GPU_DEFS to CMAKE_DEFS. This will override any previously set build flags and allows for customizing GPU options when building from source.
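A hedged sketch of what using this looks like, driven from Python for consistency with the other examples here. It assumes the build-from-source flow of this era (`go generate ./...` followed by `go build .`); the CUDA architecture value is a placeholder.

```python
# Sketch: pass custom CMake defines into the llama.cpp build when building
# Ollama from source. The architecture value (87 = Jetson Orin) is a
# placeholder assumption; substitute your GPU's compute capability.
import os
import subprocess

env = dict(os.environ, OLLAMA_CUSTOM_GPU_DEFS="-DCMAKE_CUDA_ARCHITECTURES=87")
subprocess.run(["go", "generate", "./..."], env=env, check=True)
subprocess.run(["go", "build", "."], check=True)
```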
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3709/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5545
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5545/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5545/comments
https://api.github.com/repos/ollama/ollama/issues/5545/events
https://github.com/ollama/ollama/issues/5545
2,395,793,435
I_kwDOJ0Z1Ps6OzOgb
5,545
OpenAI v1/completion throws an error when passing a list of strings to the stop parameter.
{ "login": "chigkim", "id": 22120994, "node_id": "MDQ6VXNlcjIyMTIwOTk0", "avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chigkim", "html_url": "https://github.com/chigkim", "followers_url": "https://api.github.com/users/chigkim/followers", "following_url": "https://api.github.com/users/chigkim/following{/other_user}", "gists_url": "https://api.github.com/users/chigkim/gists{/gist_id}", "starred_url": "https://api.github.com/users/chigkim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chigkim/subscriptions", "organizations_url": "https://api.github.com/users/chigkim/orgs", "repos_url": "https://api.github.com/users/chigkim/repos", "events_url": "https://api.github.com/users/chigkim/events{/privacy}", "received_events_url": "https://api.github.com/users/chigkim/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-07-08T14:23:48
2024-07-10T00:59:12
2024-07-09T21:01:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The new OpenAI v1/completion (not chat.completion) throws an error if you pass a list of strings to the stop parameter.

```python
from openai import OpenAI

client = OpenAI(base_url=base_url, api_key=api_key)
prompt = """User: Hello,
Assistant: Hi, how can I help you?
User: How's it going?
Assistant:"""
model = "llama3:text"
stop = ["User:", "Assistant:"]  # Triggers an error
stop = "User:"  # No error
response = client.completions.create(prompt=prompt, model=model, max_tokens=128, stop=stop)
```

Error: `Error code: 400 - {'error': {'message': "invalid type for 'stop' field: []interface {}", 'type': 'invalid_request_error', 'param': None, 'code': None}}` ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.1.49
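Until the compatibility layer accepts a list here, a workaround consistent with the documented native API is to call /api/generate directly, whose `options.stop` does take a list of strings. A minimal sketch, mirroring the report's model and prompt:

```python
# Workaround sketch: the native /api/generate endpoint accepts a list of
# stop strings in options.stop, unlike the v1/completions shim at the time.
import json
import urllib.request

payload = {
    "model": "llama3:text",
    "prompt": "User: Hello,\nAssistant: Hi, how can I help you?\nUser: How's it going?\nAssistant:",
    "stream": False,
    "options": {"stop": ["User:", "Assistant:"]},  # list form works here
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```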
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5545/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5545/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/128
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/128/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/128/comments
https://api.github.com/repos/ollama/ollama/issues/128/events
https://github.com/ollama/ollama/pull/128
1,812,513,931
PR_kwDOJ0Z1Ps5V7ERj
128
Update modelfile.md
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-07-19T18:38:21
2023-12-05T23:52:44
2023-07-19T20:40:39
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/128", "html_url": "https://github.com/ollama/ollama/pull/128", "diff_url": "https://github.com/ollama/ollama/pull/128.diff", "patch_url": "https://github.com/ollama/ollama/pull/128.patch", "merged_at": "2023-07-19T20:40:39" }
null
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/128/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4034
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4034/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4034/comments
https://api.github.com/repos/ollama/ollama/issues/4034/events
https://github.com/ollama/ollama/issues/4034
2,269,952,319
I_kwDOJ0Z1Ps6HTLk_
4,034
Implement downloads via torrents
{ "login": "f321x", "id": 51097237, "node_id": "MDQ6VXNlcjUxMDk3MjM3", "avatar_url": "https://avatars.githubusercontent.com/u/51097237?v=4", "gravatar_id": "", "url": "https://api.github.com/users/f321x", "html_url": "https://github.com/f321x", "followers_url": "https://api.github.com/users/f321x/followers", "following_url": "https://api.github.com/users/f321x/following{/other_user}", "gists_url": "https://api.github.com/users/f321x/gists{/gist_id}", "starred_url": "https://api.github.com/users/f321x/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/f321x/subscriptions", "organizations_url": "https://api.github.com/users/f321x/orgs", "repos_url": "https://api.github.com/users/f321x/repos", "events_url": "https://api.github.com/users/f321x/events{/privacy}", "received_events_url": "https://api.github.com/users/f321x/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6896227207, "node_id": "LA_kwDOJ0Z1Ps8AAAABmwwThw", "url": "https://api.github.com/repos/ollama/ollama/labels/registry", "name": "registry", "color": "0052cc", "default": false, "description": "" } ]
open
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
5
2024-04-29T20:51:10
2024-11-14T22:55:29
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Model downloads over a slow (10 Mbit) internet connection are really unreliable and crash roughly every 5-10 GB for me (EOF, max retries). At the same time, huge torrents work very reliably. If you could call out to an external torrent client for model downloads, or build in a torrent client, the download experience would be more reliable and much faster, and you could save costs on hosting the files. This library could be used to implement it: https://github.com/anacrolix/torrent. Or simply create a subprocess call to transmission-cli (the hackier way; see the sketch below).
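A minimal Go sketch of the subprocess approach, assuming transmission-cli is on PATH; the .torrent path is hypothetical, since ollama does not publish torrent files today:

```
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// downloadViaTorrent shells out to transmission-cli and writes the payload
// into destDir. Both arguments are illustrative placeholders.
func downloadViaTorrent(torrentPath, destDir string) error {
	cmd := exec.Command("transmission-cli", "--download-dir", destDir, torrentPath)
	cmd.Stdout = os.Stdout // surface transmission's own progress output
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Hypothetical torrent for a model blob; no such file exists upstream.
	if err := downloadViaTorrent("llama2-13b.torrent", "./models"); err != nil {
		fmt.Fprintln(os.Stderr, "torrent download failed:", err)
		os.Exit(1)
	}
}
```

The anacrolix/torrent route would avoid the external binary entirely, at the cost of pulling a full torrent implementation into the ollama binary.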
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4034/reactions", "total_count": 18, "+1": 18, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4034/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2895
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2895/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2895/comments
https://api.github.com/repos/ollama/ollama/issues/2895/events
https://github.com/ollama/ollama/issues/2895
2,165,423,108
I_kwDOJ0Z1Ps6BEbwE
2,895
May I add GBNF support?
{ "login": "josharian", "id": 67496, "node_id": "MDQ6VXNlcjY3NDk2", "avatar_url": "https://avatars.githubusercontent.com/u/67496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/josharian", "html_url": "https://github.com/josharian", "followers_url": "https://api.github.com/users/josharian/followers", "following_url": "https://api.github.com/users/josharian/following{/other_user}", "gists_url": "https://api.github.com/users/josharian/gists{/gist_id}", "starred_url": "https://api.github.com/users/josharian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/josharian/subscriptions", "organizations_url": "https://api.github.com/users/josharian/orgs", "repos_url": "https://api.github.com/users/josharian/repos", "events_url": "https://api.github.com/users/josharian/events{/privacy}", "received_events_url": "https://api.github.com/users/josharian/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-03-03T15:45:31
2024-03-03T18:47:17
2024-03-03T18:47:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi! I see you're drowning in issues and PRs here. :) Partly as a follow-up to #2623, I'd like to add support for arbitrary GBNF. I'm going to do this for myself regardless; the question is whether I should polish it, document it, and upstream it. The trickiest API part is the command-line interface. The server can just grow a "gbnf" request param. The command line... maybe a gbnf-file flag instead? (An alternative is curl-like, where it is just "gbnf", but a value starting with `@` causes it to be read from a file; see the sketch below.) Or just omit it from the command line?
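For the curl-like variant, a minimal Go sketch of the '@' convention (the function name and behavior are one reading of the proposal, not an existing ollama API):

```
package main

import (
	"fmt"
	"os"
	"strings"
)

// resolveGBNF treats a value starting with '@' as a file to read the
// grammar from; any other value is used as the grammar literally.
func resolveGBNF(arg string) (string, error) {
	if strings.HasPrefix(arg, "@") {
		data, err := os.ReadFile(strings.TrimPrefix(arg, "@"))
		if err != nil {
			return "", err
		}
		return string(data), nil
	}
	return arg, nil
}

func main() {
	grammar, err := resolveGBNF(`root ::= "yes" | "no"`)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(grammar) // with "@my.gbnf" it would print the file contents
}
```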
{ "login": "josharian", "id": 67496, "node_id": "MDQ6VXNlcjY3NDk2", "avatar_url": "https://avatars.githubusercontent.com/u/67496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/josharian", "html_url": "https://github.com/josharian", "followers_url": "https://api.github.com/users/josharian/followers", "following_url": "https://api.github.com/users/josharian/following{/other_user}", "gists_url": "https://api.github.com/users/josharian/gists{/gist_id}", "starred_url": "https://api.github.com/users/josharian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/josharian/subscriptions", "organizations_url": "https://api.github.com/users/josharian/orgs", "repos_url": "https://api.github.com/users/josharian/repos", "events_url": "https://api.github.com/users/josharian/events{/privacy}", "received_events_url": "https://api.github.com/users/josharian/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2895/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1094
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1094/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1094/comments
https://api.github.com/repos/ollama/ollama/issues/1094/events
https://github.com/ollama/ollama/issues/1094
1,989,128,993
I_kwDOJ0Z1Ps52j7Mh
1,094
Ambiguous state in Google Colab
{ "login": "ArsBinarii", "id": 6293391, "node_id": "MDQ6VXNlcjYyOTMzOTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6293391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArsBinarii", "html_url": "https://github.com/ArsBinarii", "followers_url": "https://api.github.com/users/ArsBinarii/followers", "following_url": "https://api.github.com/users/ArsBinarii/following{/other_user}", "gists_url": "https://api.github.com/users/ArsBinarii/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArsBinarii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArsBinarii/subscriptions", "organizations_url": "https://api.github.com/users/ArsBinarii/orgs", "repos_url": "https://api.github.com/users/ArsBinarii/repos", "events_url": "https://api.github.com/users/ArsBinarii/events{/privacy}", "received_events_url": "https://api.github.com/users/ArsBinarii/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-11-11T21:44:03
2023-11-11T22:15:12
2023-11-11T22:15:12
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Google Colab, T4. Installed CUDA 12.3 via https://developer.nvidia.com/cuda-downloads; now nvidia-smi shows 12.0, but nvcc reports 12.3.

Ran ollama via:

<pre>
import os
import threading
from pyngrok import ngrok
import subprocess
import time

def ollama():
    os.environ['OLLAMA_HOST'] = '0.0.0.0:11434'
    os.environ['OLLAMA_ORIGINS'] = '*'
    subprocess.Popen(["ollama", "serve"])

def ngrok_tunnel():
    # Wait for some time to ensure ollama is fully started
    time.sleep(10)
    port = "11434"
    public_url = ngrok.connect(port).public_url
    print(f" * ngrok tunnel {public_url} -> http://127.0.0.1:{port}")

def monitor_gpu():
    while True:
        print(subprocess.check_output(["nvidia-smi"]).decode("utf-8"))
        time.sleep(10)  # adjust the sleep time to your preference

# Create threads to run ollama, ngrok_tunnel, and monitor_gpu in the background
ollama_thread = threading.Thread(target=ollama)
ngrok_thread = threading.Thread(target=ngrok_tunnel)
gpu_monitor_thread = threading.Thread(target=monitor_gpu)

# Start the threads
ollama_thread.start()
ngrok_thread.start()
gpu_monitor_thread.start()

# Optional: keep the Colab cell running, preventing the threads from exiting
while True:
    pass
</pre>

Downloaded wizard-vicuna-uncensored:30b and sent a simple prompt, both via the API. As per the image, the model appears to be loaded into GPU memory, but performance is low: top reports 99-100% CPU usage, there is some RAM usage, and nvidia-smi reports 0% GPU usage.

<img width="1102" alt="image" src="https://github.com/jmorganca/ollama/assets/6293391/d3238fca-9365-412b-8e94-bc932fc21a71">
{ "login": "ArsBinarii", "id": 6293391, "node_id": "MDQ6VXNlcjYyOTMzOTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6293391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArsBinarii", "html_url": "https://github.com/ArsBinarii", "followers_url": "https://api.github.com/users/ArsBinarii/followers", "following_url": "https://api.github.com/users/ArsBinarii/following{/other_user}", "gists_url": "https://api.github.com/users/ArsBinarii/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArsBinarii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArsBinarii/subscriptions", "organizations_url": "https://api.github.com/users/ArsBinarii/orgs", "repos_url": "https://api.github.com/users/ArsBinarii/repos", "events_url": "https://api.github.com/users/ArsBinarii/events{/privacy}", "received_events_url": "https://api.github.com/users/ArsBinarii/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1094/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1706
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1706/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1706/comments
https://api.github.com/repos/ollama/ollama/issues/1706/events
https://github.com/ollama/ollama/pull/1706
2,055,564,721
PR_kwDOJ0Z1Ps5ivFm9
1,706
Add Community Integration: Chatbox
{ "login": "Bin-Huang", "id": 20723142, "node_id": "MDQ6VXNlcjIwNzIzMTQy", "avatar_url": "https://avatars.githubusercontent.com/u/20723142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bin-Huang", "html_url": "https://github.com/Bin-Huang", "followers_url": "https://api.github.com/users/Bin-Huang/followers", "following_url": "https://api.github.com/users/Bin-Huang/following{/other_user}", "gists_url": "https://api.github.com/users/Bin-Huang/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bin-Huang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bin-Huang/subscriptions", "organizations_url": "https://api.github.com/users/Bin-Huang/orgs", "repos_url": "https://api.github.com/users/Bin-Huang/repos", "events_url": "https://api.github.com/users/Bin-Huang/events{/privacy}", "received_events_url": "https://api.github.com/users/Bin-Huang/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-12-25T09:46:38
2024-02-23T12:17:28
2024-02-23T12:17:28
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1706", "html_url": "https://github.com/ollama/ollama/pull/1706", "diff_url": "https://github.com/ollama/ollama/pull/1706.diff", "patch_url": "https://github.com/ollama/ollama/pull/1706.patch", "merged_at": "2024-02-23T12:17:28" }
Thank you so much for developing Ollama; it has made running llama2 on my Mac incredibly simple. I've completely forgotten how I used to handle all the dependencies myself. Recently, I've added support for Ollama's locally deployed models to my project [Chatbox](https://github.com/Bin-Huang/chatbox) (in the [latest release](https://github.com/Bin-Huang/chatbox/releases)), and now Chatbox + Ollama is just fantastic.🍻 ![Dec-25-2023 17-38-17](https://github.com/jmorganca/ollama/assets/20723142/75791fb3-5fc0-48ea-a7d9-3fbaf3ca0a3e)
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1706/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1706/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4044
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4044/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4044/comments
https://api.github.com/repos/ollama/ollama/issues/4044/events
https://github.com/ollama/ollama/issues/4044
2,270,989,631
I_kwDOJ0Z1Ps6HXI0_
4,044
Problems with multiple GPUs using v0.1.33-rc5
{ "login": "cBrainAI", "id": 156695209, "node_id": "U_kgDOCVb6qQ", "avatar_url": "https://avatars.githubusercontent.com/u/156695209?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cBrainAI", "html_url": "https://github.com/cBrainAI", "followers_url": "https://api.github.com/users/cBrainAI/followers", "following_url": "https://api.github.com/users/cBrainAI/following{/other_user}", "gists_url": "https://api.github.com/users/cBrainAI/gists{/gist_id}", "starred_url": "https://api.github.com/users/cBrainAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cBrainAI/subscriptions", "organizations_url": "https://api.github.com/users/cBrainAI/orgs", "repos_url": "https://api.github.com/users/cBrainAI/repos", "events_url": "https://api.github.com/users/cBrainAI/events{/privacy}", "received_events_url": "https://api.github.com/users/cBrainAI/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA", "url": "https://api.github.com/repos/ollama/ollama/labels/amd", "name": "amd", "color": "000000", "default": false, "description": "Issues relating to AMD GPUs and ROCm" }, { "id": 6677745918, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g", "url": "https://api.github.com/repos/ollama/ollama/labels/gpu", "name": "gpu", "color": "76C49E", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
3
2024-04-30T09:36:56
2024-05-02T16:04:23
2024-05-02T16:04:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

I am testing the fantastic(!) new OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS features in v0.1.33-rc5. I am running ollama using Docker on a machine with two RTX 4090s. Unfortunately, it seems that ollama does not use both graphics cards with v0.1.33-rc5; this worked perfectly with previous versions (I have just tested with v0.1.32). It does not matter whether I leave the environment variables unset, set them to 1, or set them to e.g. 4.

As you can see in the log below, ollama detects the 2 GPUs:

```
ollama | time=2024-04-30T09:37:32.070Z level=INFO source=images.go:821 msg="total blobs: 68"
ollama | time=2024-04-30T09:37:32.071Z level=INFO source=images.go:828 msg="total unused blobs removed: 0"
ollama | time=2024-04-30T09:37:32.071Z level=INFO source=routes.go:1074 msg="Listening on [::]:11434 (version 0.1.33-rc5)"
ollama | time=2024-04-30T09:37:32.072Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3211222627/runners
ollama | time=2024-04-30T09:37:34.328Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v60002 cpu]"
ollama | time=2024-04-30T09:37:34.328Z level=INFO source=gpu.go:96 msg="Detecting GPUs"
ollama | time=2024-04-30T09:37:34.367Z level=INFO source=gpu.go:101 msg="detected GPUs" library=/tmp/ollama3211222627/runners/cuda_v11/libcudart.so.11.0 count=2
ollama | time=2024-04-30T09:37:34.367Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
```

But I can see (using `nvtop`) that only one GPU is used during prompt evaluation.

### OS

Docker

### GPU

Nvidia

### CPU

AMD

### Ollama version

v0.1.33-rc5
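For reference, a minimal sketch of how these variables can be set on the container (standard ollama/ollama image and real Docker flags; the values are illustrative and do not work around the bug being reported):

```
docker run -d --gpus all \
  -e OLLAMA_NUM_PARALLEL=4 \
  -e OLLAMA_MAX_LOADED_MODELS=2 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama
```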
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4044/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2299
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2299/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2299/comments
https://api.github.com/repos/ollama/ollama/issues/2299/events
https://github.com/ollama/ollama/pull/2299
2,111,389,729
PR_kwDOJ0Z1Ps5lo9BF
2,299
use `llm.ImageData`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-02-01T02:57:46
2024-02-01T03:11:11
2024-02-01T03:11:11
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2299", "html_url": "https://github.com/ollama/ollama/pull/2299", "diff_url": "https://github.com/ollama/ollama/pull/2299.diff", "patch_url": "https://github.com/ollama/ollama/pull/2299.patch", "merged_at": "2024-02-01T03:11:11" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2299/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/695
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/695/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/695/comments
https://api.github.com/repos/ollama/ollama/issues/695/events
https://github.com/ollama/ollama/issues/695
1,925,546,597
I_kwDOJ0Z1Ps5yxYJl
695
Can't resume download (pull) after server restart
{ "login": "KcZLog", "id": 135950770, "node_id": "U_kgDOCBpxsg", "avatar_url": "https://avatars.githubusercontent.com/u/135950770?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KcZLog", "html_url": "https://github.com/KcZLog", "followers_url": "https://api.github.com/users/KcZLog/followers", "following_url": "https://api.github.com/users/KcZLog/following{/other_user}", "gists_url": "https://api.github.com/users/KcZLog/gists{/gist_id}", "starred_url": "https://api.github.com/users/KcZLog/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KcZLog/subscriptions", "organizations_url": "https://api.github.com/users/KcZLog/orgs", "repos_url": "https://api.github.com/users/KcZLog/repos", "events_url": "https://api.github.com/users/KcZLog/events{/privacy}", "received_events_url": "https://api.github.com/users/KcZLog/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
8
2023-10-04T07:08:44
2024-12-16T17:05:01
2024-01-16T22:21:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Auto-pruning on server start was added in #491, but this loses unfinished/failed download progress when the server restarts. Please change this to allow downloads to continue. Suggestions (option 2 is sketched below):

1. Don't auto-prune; pruning on delete is probably enough?
2. If auto-pruning is wanted, use a separate directory for unfinished downloads, or use a name prefix (this may orphan unfinished files across a version change).
3. Use a file that lists every download, and don't prune those files. On a new version, update the list; previously unlisted files automatically get pruned.
4. Immediately create the manifest before the download to prevent pruning of those files, and add and check a property marking the download as incomplete.
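A minimal Go sketch of suggestion 2, assuming a hypothetical ".partial" suffix marks unfinished downloads; this is not ollama's actual prune logic:

```
package main

import (
	"os"
	"path/filepath"
	"strings"
)

// pruneBlobs removes unreferenced blob files but skips anything with the
// (hypothetical) ".partial" suffix, so interrupted pulls survive restarts.
func pruneBlobs(dir string, referenced map[string]bool) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, entry := range entries {
		name := entry.Name()
		if entry.IsDir() || strings.HasSuffix(name, ".partial") {
			continue // keep in-progress downloads so they can resume
		}
		if !referenced[name] {
			if err := os.Remove(filepath.Join(dir, name)); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	// Illustrative call: prune everything in ./blobs except one known blob.
	_ = pruneBlobs("./blobs", map[string]bool{"sha256-abc123": true})
}
```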
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/695/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/695/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/8669
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8669/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8669/comments
https://api.github.com/repos/ollama/ollama/issues/8669/events
https://github.com/ollama/ollama/issues/8669
2,818,980,252
I_kwDOJ0Z1Ps6oBjmc
8,669
deepseek-r1:32b does not support tools? The qwen2.5 base model should support them.
{ "login": "HuChundong", "id": 3194932, "node_id": "MDQ6VXNlcjMxOTQ5MzI=", "avatar_url": "https://avatars.githubusercontent.com/u/3194932?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HuChundong", "html_url": "https://github.com/HuChundong", "followers_url": "https://api.github.com/users/HuChundong/followers", "following_url": "https://api.github.com/users/HuChundong/following{/other_user}", "gists_url": "https://api.github.com/users/HuChundong/gists{/gist_id}", "starred_url": "https://api.github.com/users/HuChundong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HuChundong/subscriptions", "organizations_url": "https://api.github.com/users/HuChundong/orgs", "repos_url": "https://api.github.com/users/HuChundong/repos", "events_url": "https://api.github.com/users/HuChundong/events{/privacy}", "received_events_url": "https://api.github.com/users/HuChundong/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
2025-01-29T18:49:54
2025-01-29T20:29:43
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

When I use AutoGen, deepseek-r1:32b raises an error: the model does not support tools.

### OS

WSL2

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.5.7
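A minimal repro sketch against ollama's /api/chat endpoint, sending a tools array (the get_time tool is made up, and the expected failure mode is one reading of the report: the server rejects models whose chat template lacks tool support):

```
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Hypothetical tool definition; only the request shape matters here.
	body := `{
	  "model": "deepseek-r1:32b",
	  "messages": [{"role": "user", "content": "what time is it?"}],
	  "stream": false,
	  "tools": [{
	    "type": "function",
	    "function": {
	      "name": "get_time",
	      "description": "Get the current time",
	      "parameters": {"type": "object", "properties": {}}
	    }
	  }]
	}`
	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", strings.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out)) // expect an error mentioning tool support
}
```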
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8669/timeline
null
null
false