| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/5445
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5445/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5445/comments
|
https://api.github.com/repos/ollama/ollama/issues/5445/events
|
https://github.com/ollama/ollama/pull/5445
| 2,387,212,285
|
PR_kwDOJ0Z1Ps50QEgP
| 5,445
|
CentOS 7 is EOL - Switch to Rocky 8 base
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-07-02T21:39:26
| 2024-07-02T21:39:26
| null |
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5445",
"html_url": "https://github.com/ollama/ollama/pull/5445",
"diff_url": "https://github.com/ollama/ollama/pull/5445.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5445.patch",
"merged_at": null
}
|
CentOS 7 has hit EOL. The next best old-glibc distro is Rocky Linux 8, which is already used for some of our base images. This switches the CentOS 7 base images over to Rocky 8. This does mean our official binaries will no longer be compatible with distros older than glibc 2.28; however, those are all EOL versions, so users can resort to building from source if they need obsolete distro support.
I'm not sure when we want to merge this, as it does narrow our support matrix for older distros.
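The glibc floor described above can be verified on a target machine before upgrading. A minimal sketch, assuming a Linux host where `platform.libc_ver()` reports glibc (the `meets_min_glibc` helper and the 2.28 threshold are taken from the PR text, not from Ollama's actual release tooling):

```python
import platform

def meets_min_glibc(version: str, minimum: str = "2.28") -> bool:
    """Compare dotted glibc versions numerically, e.g. '2.31' >= '2.28'."""
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

# platform.libc_ver() returns ('glibc', '<version>') on most Linux systems,
# and ('', '') where no glibc is detected.
libc, version = platform.libc_ver()
if libc == "glibc":
    print("official binaries OK" if meets_min_glibc(version) else "build from source")
```

Note the numeric comparison: a naive string compare would wrongly rank "2.4" above "2.28".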
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5445/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5445/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3193
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3193/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3193/comments
|
https://api.github.com/repos/ollama/ollama/issues/3193/events
|
https://github.com/ollama/ollama/issues/3193
| 2,190,704,542
|
I_kwDOJ0Z1Ps6Ck3-e
| 3,193
|
Enhance Chat History Logging for API Interactions on Windows Deployment
|
{
"login": "Mingzefei",
"id": 92701892,
"node_id": "U_kgDOBYaExA",
"avatar_url": "https://avatars.githubusercontent.com/u/92701892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mingzefei",
"html_url": "https://github.com/Mingzefei",
"followers_url": "https://api.github.com/users/Mingzefei/followers",
"following_url": "https://api.github.com/users/Mingzefei/following{/other_user}",
"gists_url": "https://api.github.com/users/Mingzefei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mingzefei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mingzefei/subscriptions",
"organizations_url": "https://api.github.com/users/Mingzefei/orgs",
"repos_url": "https://api.github.com/users/Mingzefei/repos",
"events_url": "https://api.github.com/users/Mingzefei/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mingzefei/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2024-03-17T14:44:28
| 2024-10-09T02:11:26
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
I've deployed Ollama on a Windows 10 server to make the API available within a local network. I've observed that the `.ollama/history` directory only records local command-line interactions, not capturing chats initiated over the network.
### How should we solve this?
A unified logging system should be implemented that captures both prompts and responses for all interactions, regardless of whether they are initiated locally or over the network. This change would eliminate the need to enable debugging options for basic chat history logging.
### What is the impact of not solving this?
The current limitation impacts the ability to audit and review interactions comprehensively, raising privacy concerns, especially when private information is shared via API chats. I'm currently relying on the partial logs available and manually enabling `OLLAMA_DEBUG=1` to track prompts without responses, which is not ideal.
### Anything else?
This issue stems from the discussions in this [Discord link](https://discordapp.com/channels/1128867683291627614/1218857328728608921).
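Until such a unified log exists in Ollama itself, the requested behavior can be approximated by a logging shim placed in front of the API. A sketch under assumptions: the `record_interaction` helper and the JSONL audit-log format are hypothetical, not part of Ollama; a real shim would proxy `/api/chat` and `/api/generate` and append one such line per exchange:

```python
import json
import time

def record_interaction(model: str, prompt: str, response: str) -> str:
    """Serialize one chat exchange as a single JSON line for an audit log."""
    return json.dumps({
        "ts": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "model": model,
        "prompt": prompt,
        "response": response,
    })

# A real proxy would append this to a file such as api-history.jsonl
# after forwarding each network request to the Ollama server.
line = record_interaction("llama3", "hello", "Hi there!")
print(line)
```

A JSON-lines file keeps each interaction self-contained, so network chats and local CLI chats could share one append-only log.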
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3193/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/192
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/192/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/192/comments
|
https://api.github.com/repos/ollama/ollama/issues/192/events
|
https://github.com/ollama/ollama/pull/192
| 1,818,777,845
|
PR_kwDOJ0Z1Ps5WQDH0
| 192
|
update development.md
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-24T16:44:41
| 2023-07-24T23:13:26
| 2023-07-24T23:13:22
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/192",
"html_url": "https://github.com/ollama/ollama/pull/192",
"diff_url": "https://github.com/ollama/ollama/pull/192.diff",
"patch_url": "https://github.com/ollama/ollama/pull/192.patch",
"merged_at": "2023-07-24T23:13:22"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/192/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1583
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1583/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1583/comments
|
https://api.github.com/repos/ollama/ollama/issues/1583/events
|
https://github.com/ollama/ollama/issues/1583
| 2,047,339,642
|
I_kwDOJ0Z1Ps56B-x6
| 1,583
|
Towards better Ollama
|
{
"login": "eramax",
"id": 542413,
"node_id": "MDQ6VXNlcjU0MjQxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/542413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eramax",
"html_url": "https://github.com/eramax",
"followers_url": "https://api.github.com/users/eramax/followers",
"following_url": "https://api.github.com/users/eramax/following{/other_user}",
"gists_url": "https://api.github.com/users/eramax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eramax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eramax/subscriptions",
"organizations_url": "https://api.github.com/users/eramax/orgs",
"repos_url": "https://api.github.com/users/eramax/repos",
"events_url": "https://api.github.com/users/eramax/events{/privacy}",
"received_events_url": "https://api.github.com/users/eramax/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-12-18T19:46:47
| 2024-03-11T18:29:59
| 2024-03-11T18:29:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Because Ollama has improved interactive features over llama.cpp, I prefer it.
I sincerely hope that you will expand Ollama to include other quantizations. As you are aware, this will require writing some of the quantization algorithms yourselves, because they are all written in Python with package dependencies, which you avoid by using compiled code.
I recommend checking out exllamav2, which is becoming more and more popular these days and is much faster than GGUF while consuming the same amount of VRAM or less.
Another proposal is to ship Ollama with a web user interface in addition to the console one: a very basic web application that uses the Ollama API. It would be highly beneficial to users, and if you don't want the extra work of managing data, user data could be stored locally in the browser.
I'm not really familiar with how llama.cpp works or decides which layers to offload, but I believe certain parts of the model are less crucial than others, so it may be possible to offload only certain portions (my theory isn't supported by any evidence).
Recently I have used Ollama a lot through my Colab account, and it works really well and quickly. However, I would prefer to run Ollama without requiring a service; if, during installation, I could set it up to run as an app without a service, that would be much more efficient for my Jupyter notebook, as I can't get the same experience while it's on Colab.
https://gist.github.com/eramax/8533181ad841e4612041c42d154df003
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1583/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3214
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3214/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3214/comments
|
https://api.github.com/repos/ollama/ollama/issues/3214/events
|
https://github.com/ollama/ollama/pull/3214
| 2,191,394,266
|
PR_kwDOJ0Z1Ps5p5fCe
| 3,214
|
Windows automatically recognizes username
|
{
"login": "TCOTC",
"id": 78434827,
"node_id": "MDQ6VXNlcjc4NDM0ODI3",
"avatar_url": "https://avatars.githubusercontent.com/u/78434827?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TCOTC",
"html_url": "https://github.com/TCOTC",
"followers_url": "https://api.github.com/users/TCOTC/followers",
"following_url": "https://api.github.com/users/TCOTC/following{/other_user}",
"gists_url": "https://api.github.com/users/TCOTC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TCOTC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TCOTC/subscriptions",
"organizations_url": "https://api.github.com/users/TCOTC/orgs",
"repos_url": "https://api.github.com/users/TCOTC/repos",
"events_url": "https://api.github.com/users/TCOTC/events{/privacy}",
"received_events_url": "https://api.github.com/users/TCOTC/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-18T07:06:35
| 2024-05-07T02:18:43
| 2024-05-06T22:03:14
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3214",
"html_url": "https://github.com/ollama/ollama/pull/3214",
"diff_url": "https://github.com/ollama/ollama/pull/3214.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3214.patch",
"merged_at": "2024-05-06T22:03:14"
}
|
Get the current username via "%username%".
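For context, `%username%` is cmd.exe's syntax for the `USERNAME` environment variable. A cross-platform sketch of resolving the same value (illustrative only, not the PR's actual code):

```python
import getpass

# On Windows, getpass.getuser() consults the USERNAME environment variable
# (the value behind %username%); on POSIX it falls back to LOGNAME/USER
# and finally the password database.
user = getpass.getuser()
print(user)
```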
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3214/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8594
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8594/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8594/comments
|
https://api.github.com/repos/ollama/ollama/issues/8594/events
|
https://github.com/ollama/ollama/issues/8594
| 2,811,633,348
|
I_kwDOJ0Z1Ps6nlh7E
| 8,594
|
Ollama stops accessing GPU and reverts to CPU after running for extended periods
|
{
"login": "loca5790",
"id": 96643826,
"node_id": "U_kgDOBcKq8g",
"avatar_url": "https://avatars.githubusercontent.com/u/96643826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loca5790",
"html_url": "https://github.com/loca5790",
"followers_url": "https://api.github.com/users/loca5790/followers",
"following_url": "https://api.github.com/users/loca5790/following{/other_user}",
"gists_url": "https://api.github.com/users/loca5790/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loca5790/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loca5790/subscriptions",
"organizations_url": "https://api.github.com/users/loca5790/orgs",
"repos_url": "https://api.github.com/users/loca5790/repos",
"events_url": "https://api.github.com/users/loca5790/events{/privacy}",
"received_events_url": "https://api.github.com/users/loca5790/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2025-01-26T15:52:00
| 2025-01-27T16:01:22
| 2025-01-27T15:55:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have Ollama set to keep the model persistent in VRAM for my Home Assistant usage. I moved to an RTX 3090, and after somewhere between 12 hours and a day or more, Ollama stops using the GPU and reverts to CPU only. It then gets stuck spinning the CPU for hours at a time without generating any response.
System is:
Ryzen 5700G
64 GB RAM
RTX 3090
Ollama is running via Docker Compose:
```
services:
ollama:
volumes:
- ollama:/root/.ollama
container_name: ollama
pull_policy: if_not_present
tty: true
restart: unless-stopped
image: ollama/ollama:${OLLAMA_DOCKER_TAG-latest}
ports:
- ${OLLAMA_WEBAPI_PORT-11434}:11434
deploy:
resources:
reservations:
devices:
- driver: nvidia
capabilities: [gpu, compute, utility] #["gpu"]
count: all
environment:
- OLLAMA_DEBUG=1
- CUDA_VISIBLE_DEVICES=0 # Force use of the GPU
open-webui:
build:
context: .
args:
OLLAMA_BASE_URL: '/ollama'
dockerfile: Dockerfile
image: ghcr.io/open-webui/open-webui:${WEBUI_DOCKER_TAG-main}
container_name: open-webui
volumes:
- open-webui:/app/backend/data
depends_on:
- ollama
ports:
- ${OPEN_WEBUI_PORT-3000}:8080
environment:
- 'OLLAMA_BASE_URL=http://ollama:11434'
- 'WEBUI_SECRET_KEY='
extra_hosts:
- host.docker.internal:host-gateway
restart: unless-stopped
volumes:
ollama: {}
open-webui: {}
```
I tried adding the CUDA_VISIBLE_DEVICES to force use of GPU.
A restart of the container will bring it back up and load it back into GPU. I have tried to stress test it by running multiple parallel conversation agents without any issues dropping the GPU.
There are no instances in the log that I can find where the GPU becomes unavailable or anything in debug. The only thing that alerts me in the log it has dropped the GPU is that on a request it will load the model and reference CPU.
It could be something on my end, but I've spent a few days on this without luck. I've had this happen on two machines now.
Second machine this happened on:
Same docker compose setup running in a VM on ubuntu server.
RTX 3060 running llava-phi3 as the model, not persistent, loaded only on request
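When diagnosing a fallback like this, Ollama's `/api/ps` endpoint reports per-model VRAM usage, so it can be polled to catch the moment a model stops living on the GPU. A minimal sketch, assuming the documented `/api/ps` response shape with a `size_vram` field (the `on_gpu`/`check` helpers are hypothetical):

```python
import json
from urllib.request import urlopen

def on_gpu(model: dict) -> bool:
    """A model entry from /api/ps with size_vram > 0 is at least partly on GPU."""
    return model.get("size_vram", 0) > 0

def check(host: str = "http://localhost:11434") -> list:
    """Return (name, on_gpu) for every currently loaded model."""
    with urlopen(host + "/api/ps") as resp:
        models = json.load(resp).get("models", [])
    return [(m["name"], on_gpu(m)) for m in models]

# Example entry shaped like an /api/ps response (values illustrative):
sample = {"name": "llava-phi3", "size": 4_000_000_000, "size_vram": 4_000_000_000}
print(on_gpu(sample))  # → True
```

Running `check()` from cron and alerting when any entry flips to False would pinpoint when the GPU drop happens, without waiting for a slow CPU-bound response.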
### OS
Docker, Linux
### GPU
Nvidia
### CPU
Intel, AMD
### Ollama version
0.5.4
|
{
"login": "loca5790",
"id": 96643826,
"node_id": "U_kgDOBcKq8g",
"avatar_url": "https://avatars.githubusercontent.com/u/96643826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loca5790",
"html_url": "https://github.com/loca5790",
"followers_url": "https://api.github.com/users/loca5790/followers",
"following_url": "https://api.github.com/users/loca5790/following{/other_user}",
"gists_url": "https://api.github.com/users/loca5790/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loca5790/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loca5790/subscriptions",
"organizations_url": "https://api.github.com/users/loca5790/orgs",
"repos_url": "https://api.github.com/users/loca5790/repos",
"events_url": "https://api.github.com/users/loca5790/events{/privacy}",
"received_events_url": "https://api.github.com/users/loca5790/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8594/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1279
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1279/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1279/comments
|
https://api.github.com/repos/ollama/ollama/issues/1279/events
|
https://github.com/ollama/ollama/issues/1279
| 2,011,204,862
|
I_kwDOJ0Z1Ps534Iz-
| 1,279
|
Support CPUs without AVX
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2023-11-26T21:05:30
| 2024-01-24T23:44:43
| 2024-01-20T23:43:03
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently, CPU instructions are determined at build time, meaning Ollama needs to target instruction sets that support the largest set of CPUs possible. Instead, CPU instructions should be detected at runtime, allowing for both speed and compatibility with older or less powerful CPUs.
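On Linux, the runtime detection described above amounts to reading the `flags` line of `/proc/cpuinfo` and picking the fastest supported variant. A sketch of the parsing step (illustrative; Ollama's actual implementation does this in its native runners, not in Python):

```python
def cpu_flags(cpuinfo_text: str) -> set:
    """Extract the CPU feature-flag set from /proc/cpuinfo contents."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# Sample /proc/cpuinfo excerpt; on a real system, read the file instead.
sample = "processor\t: 0\nflags\t\t: fpu sse sse2 avx avx2\n"
flags = cpu_flags(sample)
print("avx" in flags, "avx512f" in flags)  # → True False
```

A loader could then dispatch: prefer an AVX2 build if `avx2` is present, fall back to AVX, and finally to a plain SSE build on CPUs like the ones this issue targets.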
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1279/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/84
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/84/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/84/comments
|
https://api.github.com/repos/ollama/ollama/issues/84/events
|
https://github.com/ollama/ollama/issues/84
| 1,806,554,526
|
I_kwDOJ0Z1Ps5rrdWe
| 84
|
macOS Intel support
|
{
"login": "michaelthomasclark",
"id": 21243442,
"node_id": "MDQ6VXNlcjIxMjQzNDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/21243442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelthomasclark",
"html_url": "https://github.com/michaelthomasclark",
"followers_url": "https://api.github.com/users/michaelthomasclark/followers",
"following_url": "https://api.github.com/users/michaelthomasclark/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelthomasclark/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelthomasclark/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelthomasclark/subscriptions",
"organizations_url": "https://api.github.com/users/michaelthomasclark/orgs",
"repos_url": "https://api.github.com/users/michaelthomasclark/repos",
"events_url": "https://api.github.com/users/michaelthomasclark/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelthomasclark/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2023-07-16T11:58:38
| 2023-07-30T02:49:56
| 2023-07-30T02:49:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Upon unzipping the Ollama download, I'm unable to launch the app. I get the following error: "You can’t open the application “Ollama” because this application is not supported on this Mac."
The Mac is a 15" MacBook Pro from summer 2020 (with 64GB RAM on board, 32 of which is available).
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/84/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/84/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7363
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7363/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7363/comments
|
https://api.github.com/repos/ollama/ollama/issues/7363/events
|
https://github.com/ollama/ollama/issues/7363
| 2,614,944,795
|
I_kwDOJ0Z1Ps6b3OQb
| 7,363
|
Default CPU or GPU for Models
|
{
"login": "realpexian",
"id": 185111145,
"node_id": "U_kgDOCwiSaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/185111145?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/realpexian",
"html_url": "https://github.com/realpexian",
"followers_url": "https://api.github.com/users/realpexian/followers",
"following_url": "https://api.github.com/users/realpexian/following{/other_user}",
"gists_url": "https://api.github.com/users/realpexian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/realpexian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/realpexian/subscriptions",
"organizations_url": "https://api.github.com/users/realpexian/orgs",
"repos_url": "https://api.github.com/users/realpexian/repos",
"events_url": "https://api.github.com/users/realpexian/events{/privacy}",
"received_events_url": "https://api.github.com/users/realpexian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-10-25T19:28:55
| 2024-11-02T12:40:13
| 2024-11-02T12:40:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi there,
It would be nice to see a feature that allows users to set default CPU or GPU usage for specific models in Ollama. For instance, a 1.5B code-completion model could default to CPU, while larger chatbot models use the GPU.
Additionally, having the option to set per-model sleep (unload) times would help with resource management.
Thank you
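As a partial workaround with current Ollama, a CPU-only default can be approximated by deriving a model variant in a Modelfile (`num_gpu 0` keeps all layers on the CPU), and the API's `keep_alive` field already controls per-request unload timing. A sketch, with an illustrative base model name:

```
FROM qwen2.5-coder:1.5b
PARAMETER num_gpu 0
```

After `ollama create coder-cpu -f Modelfile`, requests addressed to `coder-cpu` run on the CPU while other models still use the GPU, and passing e.g. `"keep_alive": "30s"` in a generate or chat request sets how long that model stays loaded.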
|
{
"login": "realpexian",
"id": 185111145,
"node_id": "U_kgDOCwiSaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/185111145?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/realpexian",
"html_url": "https://github.com/realpexian",
"followers_url": "https://api.github.com/users/realpexian/followers",
"following_url": "https://api.github.com/users/realpexian/following{/other_user}",
"gists_url": "https://api.github.com/users/realpexian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/realpexian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/realpexian/subscriptions",
"organizations_url": "https://api.github.com/users/realpexian/orgs",
"repos_url": "https://api.github.com/users/realpexian/repos",
"events_url": "https://api.github.com/users/realpexian/events{/privacy}",
"received_events_url": "https://api.github.com/users/realpexian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7363/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/19
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/19/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/19/comments
|
https://api.github.com/repos/ollama/ollama/issues/19/events
|
https://github.com/ollama/ollama/pull/19
| 1,779,953,958
|
PR_kwDOJ0Z1Ps5UMi1F
| 19
|
remove server extras for now
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-06-29T00:57:55
| 2023-06-29T01:06:15
| 2023-06-29T01:04:37
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/19",
"html_url": "https://github.com/ollama/ollama/pull/19",
"diff_url": "https://github.com/ollama/ollama/pull/19.diff",
"patch_url": "https://github.com/ollama/ollama/pull/19.patch",
"merged_at": "2023-06-29T01:04:37"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/19/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/19/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8643
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8643/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8643/comments
|
https://api.github.com/repos/ollama/ollama/issues/8643/events
|
https://github.com/ollama/ollama/pull/8643
| 2,816,943,711
|
PR_kwDOJ0Z1Ps6JS2-2
| 8,643
|
benchmark: performance of running ollama server
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-28T23:57:54
| 2025-01-30T00:12:47
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8643",
"html_url": "https://github.com/ollama/ollama/pull/8643",
"diff_url": "https://github.com/ollama/ollama/pull/8643.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8643.patch",
"merged_at": null
}
|
This PR introduces a benchmarking framework for measuring Ollama's inference performance across different models and scenarios. The implementation measures Time To First Token (TTFT), total generation time, and tokens per second throughput.
## Key Features
- Measures both cold start and warm start performance
- Tests varying prompt lengths (short/medium/long)
- Collects metrics: TTFT, total time, token count, tokens/second
## Implementation Notes
- Uses an external Ollama server (localhost:11434) for testing, since the C dependencies must be packaged into the binary and cannot be called directly from tests
- Unloads the model between cold-start tests to ensure accurate measurements
- Implements a smart warm-up for warm-start scenarios
- Aggregates and averages results across test iterations via the Go benchmark harness
## Requirements
- Ollama server must be running locally on the default port
- Test models must be pre-downloaded
## Sample Usage
```shell
go test -bench=. -m llama3.1:8b ./...
```
The output is a standard Go benchmark log with some extra metadata added.
Sample output:
```
goos: darwin
goarch: arm64
pkg: github.com/ollama/ollama/benchmark
cpu: Apple M3 Max
BenchmarkColdStart/llama3.1:8b/cold/short_prompt-16 1 2800975666 ns/op 0.00 MB/s 58.62 gen_tok/s 100.0 gen_tokens 578.0 load_ms 52.63 prompt_tok/s 14.00 prompt_tokens 848.0 ttft_ms
BenchmarkColdStart/llama3.1:8b/cold/medium_prompt-16 1 10570117834 ns/op 0.00 MB/s 52.65 gen_tok/s 500.0 gen_tokens 573.0 load_ms 59.52 prompt_tok/s 15.00 prompt_tokens 828.0 ttft_ms
BenchmarkColdStart/llama3.1:8b/cold/long_prompt-16 1 19942159833 ns/op 0.00 MB/s 53.17 gen_tok/s 1000 gen_tokens 573.0 load_ms 58.61 prompt_tok/s 16.00 prompt_tokens 848.0 ttft_ms
BenchmarkWarmStart/llama3.1:8b/warm/short_prompt-16 1 1791833416 ns/op 0.00 MB/s 56.82 gen_tok/s 100.0 gen_tokens 12.00 load_ms 823.5 prompt_tok/s 14.00 prompt_tokens 31.00 ttft_ms
BenchmarkWarmStart/llama3.1:8b/warm/medium_prompt-16 1 9783085500 ns/op 0.00 MB/s 51.28 gen_tok/s 500.0 gen_tokens 13.00 load_ms 882.4 prompt_tok/s 15.00 prompt_tokens 32.00 ttft_ms
BenchmarkWarmStart/llama3.1:8b/warm/long_prompt-16 1 21034040166 ns/op 0.00 MB/s 47.63 gen_tok/s 1000 gen_tokens 13.00 load_ms 727.3 prompt_tok/s 16.00 prompt_tokens 37.00 ttft_ms
PASS
ok github.com/ollama/ollama/benchmark 72.374s
```
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8643/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7617
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7617/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7617/comments
|
https://api.github.com/repos/ollama/ollama/issues/7617/events
|
https://github.com/ollama/ollama/issues/7617
| 2,648,547,562
|
I_kwDOJ0Z1Ps6d3aDq
| 7,617
|
llama3.2-vision
|
{
"login": "tonilampela",
"id": 930866,
"node_id": "MDQ6VXNlcjkzMDg2Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/930866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tonilampela",
"html_url": "https://github.com/tonilampela",
"followers_url": "https://api.github.com/users/tonilampela/followers",
"following_url": "https://api.github.com/users/tonilampela/following{/other_user}",
"gists_url": "https://api.github.com/users/tonilampela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tonilampela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tonilampela/subscriptions",
"organizations_url": "https://api.github.com/users/tonilampela/orgs",
"repos_url": "https://api.github.com/users/tonilampela/repos",
"events_url": "https://api.github.com/users/tonilampela/events{/privacy}",
"received_events_url": "https://api.github.com/users/tonilampela/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-11-11T08:23:46
| 2025-01-04T01:44:38
| 2024-11-11T13:00:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Running the llama3.2-vision 11b model currently seems to throw request-timed-out errors.
BoltAI 1.26.1
ollama 0.4.1
In the Ollama console output I can see:
```
[GIN] 2024/11/11 - 10:17:38 | 500 | 1m0s | 127.0.0.1 | POST "/v1/chat/completions"
time=2024-11-11T10:17:38.719+02:00 level=WARN source=sched.go:137 msg="mllama doesn't support parallel requests yet"
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.4.1
|
{
"login": "tonilampela",
"id": 930866,
"node_id": "MDQ6VXNlcjkzMDg2Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/930866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tonilampela",
"html_url": "https://github.com/tonilampela",
"followers_url": "https://api.github.com/users/tonilampela/followers",
"following_url": "https://api.github.com/users/tonilampela/following{/other_user}",
"gists_url": "https://api.github.com/users/tonilampela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tonilampela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tonilampela/subscriptions",
"organizations_url": "https://api.github.com/users/tonilampela/orgs",
"repos_url": "https://api.github.com/users/tonilampela/repos",
"events_url": "https://api.github.com/users/tonilampela/events{/privacy}",
"received_events_url": "https://api.github.com/users/tonilampela/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7617/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7564
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7564/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7564/comments
|
https://api.github.com/repos/ollama/ollama/issues/7564/events
|
https://github.com/ollama/ollama/issues/7564
| 2,642,460,671
|
I_kwDOJ0Z1Ps6dgL__
| 7,564
|
Ollama fails to run with ROCm 6.2.2 in Arch packaging
|
{
"login": "kode54",
"id": 796316,
"node_id": "MDQ6VXNlcjc5NjMxNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/796316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kode54",
"html_url": "https://github.com/kode54",
"followers_url": "https://api.github.com/users/kode54/followers",
"following_url": "https://api.github.com/users/kode54/following{/other_user}",
"gists_url": "https://api.github.com/users/kode54/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kode54/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kode54/subscriptions",
"organizations_url": "https://api.github.com/users/kode54/orgs",
"repos_url": "https://api.github.com/users/kode54/repos",
"events_url": "https://api.github.com/users/kode54/events{/privacy}",
"received_events_url": "https://api.github.com/users/kode54/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7700262114,
"node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g",
"url": "https://api.github.com/repos/ollama/ollama/labels/build",
"name": "build",
"color": "006b75",
"default": false,
"description": "Issues relating to building ollama from source"
}
] |
closed
| false
| null |
[] | null | 44
| 2024-11-07T23:39:44
| 2024-11-20T19:54:16
| 2024-11-18T07:58:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I figure this is a downstream packaging issue, but it could possibly do with some upstream help. Arch is at 0.3.12, and has recently attempted to package against their ROCm 6.2.2 packages in the testing repositories, which I am helping to test and sign off on.
The models `llama3.2`, `llama3.1`, `llama3`, and `llama2` all fail to load and run on my RX 7700 XT. At least `llama3.1` is verified to work with this repository's official binaries package, as installed with the installer script to `/usr/local`.
The 3+ series fail with:
```
Error: llama runner process has terminated: CUDA error
```
The 2 model fails with a bit more verbose error:
```
Error: llama runner process has terminated: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR
current device: 0, in function ggml_cuda_mul_mat_batched_cublas at /build/ollama-rocm/src/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:1861
hipblasGemmStridedBatchedEx(ctx.cublas_handle(), HIPBLAS_OP_T, HIPBLAS_OP_N, ne01, ne11, ne10, alpha, (const char *) src0_f16, HIPBLAS_R_16F, nb01/nb00, nb02/nb00, (const char *) src1_f16, HIPBLAS_R_16F, nb11/nb10, nb12/nb10, beta, ( char *) dst_t, cu_data_type, ne01, nb2/nb0, ne12*ne13, cu_compute_type, HIPBLAS_GEMM_DEFAULT)
/build/ollama-rocm/src/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:102: CUDA error
```
I will be attempting to upgrade the Arch package to build ollama 0.4.0, and test further. I'm also reporting this to Arch packaging for `ollama-rocm`, since again, it's likely a downstream packaging issue, but I'm not sure exactly what issue would cause it, be it the particular version of ROCm, or something else with the packaging.
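For what it's worth, when a packaged ROCm build lacks kernels for a card's exact gfx target (the RX 7700 XT reports gfx1101), a commonly reported workaround is overriding the advertised GFX version; a sketch as a systemd drop-in, assuming the Arch package installs the service as `ollama.service`:

```
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=11.0.0"
```

followed by `systemctl daemon-reload && systemctl restart ollama`. This masks real kernel mismatches, so treat it as a diagnostic aid rather than a fix for the packaging issue itself.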
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.12
|
{
"login": "kode54",
"id": 796316,
"node_id": "MDQ6VXNlcjc5NjMxNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/796316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kode54",
"html_url": "https://github.com/kode54",
"followers_url": "https://api.github.com/users/kode54/followers",
"following_url": "https://api.github.com/users/kode54/following{/other_user}",
"gists_url": "https://api.github.com/users/kode54/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kode54/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kode54/subscriptions",
"organizations_url": "https://api.github.com/users/kode54/orgs",
"repos_url": "https://api.github.com/users/kode54/repos",
"events_url": "https://api.github.com/users/kode54/events{/privacy}",
"received_events_url": "https://api.github.com/users/kode54/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7564/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7564/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5538
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5538/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5538/comments
|
https://api.github.com/repos/ollama/ollama/issues/5538/events
|
https://github.com/ollama/ollama/issues/5538
| 2,394,540,702
|
I_kwDOJ0Z1Ps6Oucqe
| 5,538
|
autogen: Model llama3 is not found
|
{
"login": "jjeejj",
"id": 15176971,
"node_id": "MDQ6VXNlcjE1MTc2OTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15176971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jjeejj",
"html_url": "https://github.com/jjeejj",
"followers_url": "https://api.github.com/users/jjeejj/followers",
"following_url": "https://api.github.com/users/jjeejj/following{/other_user}",
"gists_url": "https://api.github.com/users/jjeejj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jjeejj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jjeejj/subscriptions",
"organizations_url": "https://api.github.com/users/jjeejj/orgs",
"repos_url": "https://api.github.com/users/jjeejj/repos",
"events_url": "https://api.github.com/users/jjeejj/events{/privacy}",
"received_events_url": "https://api.github.com/users/jjeejj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-07-08T03:40:49
| 2024-11-06T01:09:11
| 2024-11-06T01:09:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ref doc: https://ollama.com/blog/openai-compatibility
```python
llm_config = {
"model": "llama3",
"api_key": "ollama",
"base_url": "http://localhost:11434/v1",
}
```
[autogen.oai.client: 07-08 11:33:27] {329} WARNING - Model llama3 is not found.
How can I solve this?
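This warning often comes from the client library not recognizing the model name rather than from Ollama itself, so a first debugging step is to compare the name in `llm_config` against what the server actually lists (Ollama tags pulled models, e.g. `llama3:latest`, and recent versions expose an OpenAI-compatible `GET /v1/models`). A minimal Go sketch of that comparison, with the HTTP response body hardcoded for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// modelList mirrors the OpenAI-style response shape of GET /v1/models.
type modelList struct {
	Data []struct {
		ID string `json:"id"`
	} `json:"data"`
}

// hasModel reports whether name exactly matches a listed model ID.
func hasModel(body []byte, name string) (bool, error) {
	var list modelList
	if err := json.Unmarshal(body, &list); err != nil {
		return false, err
	}
	for _, m := range list.Data {
		if m.ID == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// In practice this body would come from http.Get("http://localhost:11434/v1/models").
	sample := []byte(`{"data":[{"id":"llama3:latest"}]}`)
	for _, name := range []string{"llama3", "llama3:latest"} {
		found, _ := hasModel(sample, name)
		fmt.Printf("%q found: %v\n", name, found)
	}
}
```

If the tagged name is what the server lists, using it verbatim in `llm_config` (or registering the name with the client library's model table) is worth trying.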
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5538/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5472
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5472/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5472/comments
|
https://api.github.com/repos/ollama/ollama/issues/5472/events
|
https://github.com/ollama/ollama/pull/5472
| 2,389,611,080
|
PR_kwDOJ0Z1Ps50YTwa
| 5,472
|
fix error detection by limiting model loading error parsing
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-03T22:42:59
| 2024-07-04T00:04:31
| 2024-07-04T00:04:30
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5472",
"html_url": "https://github.com/ollama/ollama/pull/5472",
"diff_url": "https://github.com/ollama/ollama/pull/5472.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5472.patch",
"merged_at": "2024-07-04T00:04:30"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5472/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6370
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6370/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6370/comments
|
https://api.github.com/repos/ollama/ollama/issues/6370/events
|
https://github.com/ollama/ollama/issues/6370
| 2,467,888,747
|
I_kwDOJ0Z1Ps6TGP5r
| 6,370
|
Error: llama runner process no longer running: -1
|
{
"login": "josephyuzb",
"id": 14102668,
"node_id": "MDQ6VXNlcjE0MTAyNjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/14102668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josephyuzb",
"html_url": "https://github.com/josephyuzb",
"followers_url": "https://api.github.com/users/josephyuzb/followers",
"following_url": "https://api.github.com/users/josephyuzb/following{/other_user}",
"gists_url": "https://api.github.com/users/josephyuzb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/josephyuzb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josephyuzb/subscriptions",
"organizations_url": "https://api.github.com/users/josephyuzb/orgs",
"repos_url": "https://api.github.com/users/josephyuzb/repos",
"events_url": "https://api.github.com/users/josephyuzb/events{/privacy}",
"received_events_url": "https://api.github.com/users/josephyuzb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-08-15T11:35:52
| 2024-09-04T00:30:18
| 2024-09-04T00:30:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
root@ai-Default-string:/home/ai# ollama run llama3.1:405b
pulling manifest
pulling 939fd971f038... 100% ▕██████████████████████████████████████████████████████████████████████▏ 228 GB
pulling f000eeb056ec... 100% ▕██████████████████████████████████████████████████████████████████████▏ 1.4 KB
pulling 0ba8f0e314b4... 100% ▕██████████████████████████████████████████████████████████████████████▏ 12 KB
pulling 56bb8bd477a5... 100% ▕██████████████████████████████████████████████████████████████████████▏ 96 B
pulling 02766cd47dfb... 100% ▕██████████████████████████████████████████████████████████████████████▏ 487 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: llama runner process no longer running: -1
### OS
Linux
### GPU
AMD
### CPU
Intel
### Ollama version
ollama version is 0.0.0
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6370/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3822
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3822/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3822/comments
|
https://api.github.com/repos/ollama/ollama/issues/3822/events
|
https://github.com/ollama/ollama/issues/3822
| 2,256,538,201
|
I_kwDOJ0Z1Ps6GgApZ
| 3,822
|
Add API endpoint to see GPU hardware available
|
{
"login": "parker-research",
"id": 166864283,
"node_id": "U_kgDOCfIlmw",
"avatar_url": "https://avatars.githubusercontent.com/u/166864283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parker-research",
"html_url": "https://github.com/parker-research",
"followers_url": "https://api.github.com/users/parker-research/followers",
"following_url": "https://api.github.com/users/parker-research/following{/other_user}",
"gists_url": "https://api.github.com/users/parker-research/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parker-research/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parker-research/subscriptions",
"organizations_url": "https://api.github.com/users/parker-research/orgs",
"repos_url": "https://api.github.com/users/parker-research/repos",
"events_url": "https://api.github.com/users/parker-research/events{/privacy}",
"received_events_url": "https://api.github.com/users/parker-research/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-04-22T13:25:32
| 2024-10-23T18:43:01
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be helpful if there were an API endpoint to see the hardware details of the machine the Ollama server is running on.
In research and academic compute clusters especially, it can be difficult to verify that the right resources are being used. Having the option to dump those details would help a ton!
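A sketch of how a client might consume such an endpoint, assuming a hypothetical `/api/hardware` route and response shape (neither is part of Ollama's actual API; the field names below are illustrative only):

```python
# Hypothetical response shape for an /api/hardware endpoint.
# The route and schema are assumptions for illustration, not Ollama's real API.
SAMPLE_RESPONSE = {
    "gpus": [
        {
            "library": "cuda",
            "name": "NVIDIA RTX 4070 Ti Super",
            "total_vram_mib": 16376,
            "free_vram_mib": 15857,
        },
    ],
    "cpu": {"model": "Intel Core i9-14900K", "threads": 32, "avx2": True},
}

def summarize_hardware(payload: dict) -> str:
    """Render a one-line-per-device summary of the hypothetical payload."""
    lines = []
    for gpu in payload.get("gpus", []):
        lines.append(
            f"GPU {gpu['name']} ({gpu['library']}): "
            f"{gpu['free_vram_mib']}/{gpu['total_vram_mib']} MiB free"
        )
    cpu = payload.get("cpu")
    if cpu:
        lines.append(f"CPU {cpu['model']}: {cpu['threads']} threads, AVX2={cpu['avx2']}")
    return "\n".join(lines)

print(summarize_hardware(SAMPLE_RESPONSE))
```

In a cluster setting, a summary like this could be logged at job start to confirm the scheduler actually granted the expected GPU.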
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3822/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6962
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6962/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6962/comments
|
https://api.github.com/repos/ollama/ollama/issues/6962/events
|
https://github.com/ollama/ollama/pull/6962
| 2,548,739,429
|
PR_kwDOJ0Z1Ps58spwI
| 6,962
|
README: Fix llama3.1 -> llama3.2 typo
|
{
"login": "Xe",
"id": 529003,
"node_id": "MDQ6VXNlcjUyOTAwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/529003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xe",
"html_url": "https://github.com/Xe",
"followers_url": "https://api.github.com/users/Xe/followers",
"following_url": "https://api.github.com/users/Xe/following{/other_user}",
"gists_url": "https://api.github.com/users/Xe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xe/subscriptions",
"organizations_url": "https://api.github.com/users/Xe/orgs",
"repos_url": "https://api.github.com/users/Xe/repos",
"events_url": "https://api.github.com/users/Xe/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-25T18:48:09
| 2024-09-25T18:53:48
| 2024-09-25T18:53:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6962",
"html_url": "https://github.com/ollama/ollama/pull/6962",
"diff_url": "https://github.com/ollama/ollama/pull/6962.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6962.patch",
"merged_at": "2024-09-25T18:53:47"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6962/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3415
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3415/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3415/comments
|
https://api.github.com/repos/ollama/ollama/issues/3415/events
|
https://github.com/ollama/ollama/issues/3415
| 2,216,399,345
|
I_kwDOJ0Z1Ps6EG5Hx
| 3,415
|
Ollama does not use my ram memory
|
{
"login": "faugustdev",
"id": 107934062,
"node_id": "U_kgDOBm7xbg",
"avatar_url": "https://avatars.githubusercontent.com/u/107934062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faugustdev",
"html_url": "https://github.com/faugustdev",
"followers_url": "https://api.github.com/users/faugustdev/followers",
"following_url": "https://api.github.com/users/faugustdev/following{/other_user}",
"gists_url": "https://api.github.com/users/faugustdev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faugustdev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faugustdev/subscriptions",
"organizations_url": "https://api.github.com/users/faugustdev/orgs",
"repos_url": "https://api.github.com/users/faugustdev/repos",
"events_url": "https://api.github.com/users/faugustdev/events{/privacy}",
"received_events_url": "https://api.github.com/users/faugustdev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-03-30T11:31:46
| 2024-04-19T02:42:08
| 2024-04-15T19:30:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
## Ollama Resource Utilization: Potential Optimization Opportunity
I'm deploying a model within Ollama and noticed that while I've allocated 24GB of RAM to the Docker container, it's currently only utilizing 117MB.
Using only 117MB suggests the model is not actually loading into the container's memory; to reach acceptable performance, the model should be able to use at least its minimum required resources.
### What did you expect to see?
### Model Response Speed and Resource Usage
While I allocated 24GB of RAM to the Docker container running the model, it's currently utilizing only 117MB. Given this limited resource usage, achieving an acceptable response speed for the model is impossible.
### Steps to reproduce
Run this command to start the Docker container: `docker run -d -v rocama:/root/.ollama -p 11434:11434 --name LLMsDazlabs --memory=24g --memory-reservation=24g rocama/ollama`
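For reference, Docker's `--memory` and `--memory-reservation` flags accept binary size suffixes, so `24g` is 24 GiB. A minimal sketch of that suffix convention (b/k/m/g as powers of 1024; this is an illustration, not Docker's actual parser):

```python
def ram_in_bytes(size: str) -> int:
    """Parse a Docker-style memory size such as '24g' into bytes.

    Sketch of the binary-suffix convention used by docker run --memory;
    not Docker's actual implementation.
    """
    multipliers = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3}
    size = size.strip().lower()
    if size[-1].isdigit():  # no suffix: already bytes
        return int(size)
    return int(size[:-1]) * multipliers[size[-1]]

print(ram_in_bytes("24g"))  # 25769803776 bytes = 24 GiB
```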
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
arm64
### Platform
Docker
### Ollama version
ollama version is 0.1.30
### GPU
_No response_
### GPU info
I am not using GPU, I am running Ollama with only CPU
Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
### CPU
Intel
### Other software
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3415/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3736
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3736/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3736/comments
|
https://api.github.com/repos/ollama/ollama/issues/3736/events
|
https://github.com/ollama/ollama/issues/3736
| 2,251,509,689
|
I_kwDOJ0Z1Ps6GM0-5
| 3,736
|
v0.1.32 is running GPU capable models on CPU
|
{
"login": "MarkWard0110",
"id": 90335263,
"node_id": "MDQ6VXNlcjkwMzM1MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkWard0110",
"html_url": "https://github.com/MarkWard0110",
"followers_url": "https://api.github.com/users/MarkWard0110/followers",
"following_url": "https://api.github.com/users/MarkWard0110/following{/other_user}",
"gists_url": "https://api.github.com/users/MarkWard0110/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarkWard0110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarkWard0110/subscriptions",
"organizations_url": "https://api.github.com/users/MarkWard0110/orgs",
"repos_url": "https://api.github.com/users/MarkWard0110/repos",
"events_url": "https://api.github.com/users/MarkWard0110/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarkWard0110/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 38
| 2024-04-18T20:32:57
| 2024-05-01T19:14:07
| 2024-05-01T19:14:07
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I sometimes find that Ollama runs, on the CPU, a model that should fit on the GPU. I just upgraded to v0.1.32. I am still trying to work out how to reproduce the issue; I don't know if it is related to an error I got when loading one of the new models.
Hardware:
Intel Core i9 14900k
DDR5 6400MHz 2x48GB
Nvidia RTX 4070 TI Super 16GB
I have yet to make it through a successful benchmark run without it doing this.
These are the logs from around when it loaded the model into CPU RAM:
```
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.315Z level=INFO source=routes.go:97 msg="changing loaded model"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.380Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.380Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.381Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama1615772994/runners/cuda_v11/libcudart.so.11.0]"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.381Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.381Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.408Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.419Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.419Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.419Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama1615772994/runners/cuda_v11/libcudart.so.11.0]"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.420Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.420Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.439Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.450Z level=INFO source=server.go:127 msg="offload to gpu" reallayers=0 layers=0 required="10961.0 MiB" used="901.1 MiB" available="270.6 MiB" kv="3200.0 MiB" fulloffload="368.0 MiB" partialoffload="444.1 MiB"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.450Z level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama1615772994/runners/cpu/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-b17551ffad6537e746d58ca02744788b230e7e30d4796976917e6c589518c830 --ctx-size 4096 --batch-size 512 --embedding --log-disable --n-gpu-layers 0 --port 39607"
Apr 18 19:28:59 quorra ollama[1170]: time=2024-04-18T19:28:59.450Z level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
Apr 18 19:28:59 quorra ollama[567056]: {"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"140525997987712","timestamp":1713468539}
Apr 18 19:28:59 quorra ollama[567056]: {"function":"server_params_parse","level":"WARN","line":2380,"msg":"Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_layers":-1,"tid":"140525997987712","timestamp":1713468539}
Apr 18 19:28:59 quorra ollama[567056]: {"build":1,"commit":"7593639","function":"main","level":"INFO","line":2819,"msg":"build info","tid":"140525997987712","timestamp":1713468539}
Apr 18 19:28:59 quorra ollama[567056]: {"function":"main","level":"INFO","line":2822,"msg":"system info","n_threads":16,"n_threads_batch":-1,"system_info":"AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"140525997987712","timestamp":1713468539,"total_threads":32}
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: loaded meta data with 20 key-value pairs and 363 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-b17551ffad6537e746d58ca02744788b230e7e30d4796976917e6c589518c830 (version GGUF V2)
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 0: general.architecture str = llama
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 1: general.name str = LLaMA v2
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 2: llama.context_length u32 = 4096
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 3: llama.embedding_length u32 = 5120
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 4: llama.block_count u32 = 40
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 5: llama.feed_forward_length u32 = 13824
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 7: llama.attention.head_count u32 = 40
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 40
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 11: general.file_type u32 = 2
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32003] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32003] = [0.000000, 0.000000, 0.000000, 0.0000...
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32003] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 18: tokenizer.ggml.padding_token_id u32 = 0
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - kv 19: general.quantization_version u32 = 2
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - type f32: 81 tensors
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - type q4_0: 281 tensors
Apr 18 19:28:59 quorra ollama[1170]: llama_model_loader: - type q6_K: 1 tensors
Apr 18 19:28:59 quorra ollama[1170]: llm_load_vocab: special tokens definition check successful ( 262/32003 ).
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: format = GGUF V2
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: arch = llama
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: vocab type = SPM
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_vocab = 32003
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_merges = 0
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_ctx_train = 4096
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_embd = 5120
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_head = 40
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_head_kv = 40
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_layer = 40
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_rot = 128
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_embd_head_k = 128
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_embd_head_v = 128
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_gqa = 1
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_embd_k_gqa = 5120
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_embd_v_gqa = 5120
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: f_norm_eps = 0.0e+00
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: f_logit_scale = 0.0e+00
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_ff = 13824
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_expert = 0
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_expert_used = 0
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: causal attn = 1
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: pooling type = 0
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: rope type = 0
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: rope scaling = linear
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: freq_base_train = 10000.0
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: freq_scale_train = 1
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: n_yarn_orig_ctx = 4096
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: rope_finetuned = unknown
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: ssm_d_conv = 0
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: ssm_d_inner = 0
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: ssm_d_state = 0
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: ssm_dt_rank = 0
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: model type = 13B
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: model ftype = Q4_0
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: model params = 13.02 B
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: model size = 6.86 GiB (4.53 BPW)
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: general.name = LLaMA v2
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: BOS token = 1 '<s>'
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: EOS token = 2 '</s>'
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: UNK token = 0 '<unk>'
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: PAD token = 0 '<unk>'
Apr 18 19:28:59 quorra ollama[1170]: llm_load_print_meta: LF token = 13 '<0x0A>'
Apr 18 19:28:59 quorra ollama[1170]: llm_load_tensors: ggml ctx size = 0.14 MiB
Apr 18 19:29:01 quorra ollama[1170]: llm_load_tensors: CPU buffer size = 7023.92 MiB
Apr 18 19:29:01 quorra ollama[1170]: ...................................................................................................
Apr 18 19:29:01 quorra ollama[1170]: llama_new_context_with_model: n_ctx = 4096
Apr 18 19:29:01 quorra ollama[1170]: llama_new_context_with_model: n_batch = 512
Apr 18 19:29:01 quorra ollama[1170]: llama_new_context_with_model: n_ubatch = 512
Apr 18 19:29:01 quorra ollama[1170]: llama_new_context_with_model: freq_base = 10000.0
Apr 18 19:29:01 quorra ollama[1170]: llama_new_context_with_model: freq_scale = 1
Apr 18 19:29:02 quorra ollama[1170]: llama_kv_cache_init: CPU KV buffer size = 3200.00 MiB
Apr 18 19:29:02 quorra ollama[1170]: llama_new_context_with_model: KV self size = 3200.00 MiB, K (f16): 1600.00 MiB, V (f16): 1600.00 MiB
Apr 18 19:29:02 quorra ollama[1170]: llama_new_context_with_model: CPU output buffer size = 0.14 MiB
Apr 18 19:29:02 quorra ollama[1170]: llama_new_context_with_model: CPU compute buffer size = 368.01 MiB
Apr 18 19:29:02 quorra ollama[1170]: llama_new_context_with_model: graph nodes = 1286
Apr 18 19:29:02 quorra ollama[1170]: llama_new_context_with_model: graph splits = 1
Apr 18 19:29:03 quorra ollama[567056]: {"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"140525997987712","timestamp":1713468543}
Apr 18 19:29:03 quorra ollama[567056]: {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":4096,"slot_id":0,"tid":"140525997987712","timestamp":1713468543}
Apr 18 19:29:03 quorra ollama[567056]: {"function":"main","level":"INFO","line":3064,"msg":"model loaded","tid":"140525997987712","timestamp":1713468543}
Apr 18 19:29:03 quorra ollama[567056]: {"function":"main","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"31","port":"39607","tid":"140525997987712","timestamp":1713468543}
```
The same model when loaded another time:
```
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.159Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.159Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.161Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama1615772994/runners/cuda_v11/libcudart.so.11.0]"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.161Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.161Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.199Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.211Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.211Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.212Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama1615772994/runners/cuda_v11/libcudart.so.11.0]"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.212Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.212Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.236Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.249Z level=INFO source=server.go:127 msg="offload to gpu" reallayers=41 layers=41 required="10961.0 MiB" used="10961.0 MiB" available="15857.2 MiB" kv="3200.0 MiB" fulloffload="368.0 MiB" partialoffload="444.1 MiB"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.249Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.249Z level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama1615772994/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-b17551ffad6537e746d58ca02744788b230e7e30d4796976917e6c589518c830 --ctx-size 4096 --batch-size 512 --embedding --log-disable --n-gpu-layers 41 --port 37295"
Apr 18 19:49:56 quorra ollama[1170]: time=2024-04-18T19:49:56.249Z level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
Apr 18 19:49:56 quorra ollama[583304]: {"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"140087397396480","timestamp":1713469796}
Apr 18 19:49:56 quorra ollama[583304]: {"build":1,"commit":"7593639","function":"main","level":"INFO","line":2819,"msg":"build info","tid":"140087397396480","timestamp":1713469796}
Apr 18 19:49:56 quorra ollama[583304]: {"function":"main","level":"INFO","line":2822,"msg":"system info","n_threads":16,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"140087397396480","timestamp":1713469796,"total_threads":32}
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: loaded meta data with 20 key-value pairs and 363 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-b17551ffad6537e746d58ca02744788b230e7e30d4796976917e6c589518c830 (version GGUF V2)
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 0: general.architecture str = llama
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 1: general.name str = LLaMA v2
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 2: llama.context_length u32 = 4096
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 3: llama.embedding_length u32 = 5120
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 4: llama.block_count u32 = 40
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 5: llama.feed_forward_length u32 = 13824
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 7: llama.attention.head_count u32 = 40
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 40
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 11: general.file_type u32 = 2
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32003] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32003] = [0.000000, 0.000000, 0.000000, 0.0000...
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32003] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 18: tokenizer.ggml.padding_token_id u32 = 0
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - kv 19: general.quantization_version u32 = 2
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - type f32: 81 tensors
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - type q4_0: 281 tensors
Apr 18 19:49:56 quorra ollama[1170]: llama_model_loader: - type q6_K: 1 tensors
Apr 18 19:49:56 quorra ollama[1170]: llm_load_vocab: special tokens definition check successful ( 262/32003 ).
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: format = GGUF V2
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: arch = llama
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: vocab type = SPM
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_vocab = 32003
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_merges = 0
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_ctx_train = 4096
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_embd = 5120
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_head = 40
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_head_kv = 40
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_layer = 40
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_rot = 128
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_embd_head_k = 128
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_embd_head_v = 128
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_gqa = 1
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_embd_k_gqa = 5120
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_embd_v_gqa = 5120
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: f_norm_eps = 0.0e+00
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: f_logit_scale = 0.0e+00
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_ff = 13824
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_expert = 0
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_expert_used = 0
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: causal attn = 1
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: pooling type = 0
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: rope type = 0
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: rope scaling = linear
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: freq_base_train = 10000.0
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: freq_scale_train = 1
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: n_yarn_orig_ctx = 4096
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: rope_finetuned = unknown
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: ssm_d_conv = 0
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: ssm_d_inner = 0
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: ssm_d_state = 0
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: ssm_dt_rank = 0
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: model type = 13B
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: model ftype = Q4_0
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: model params = 13.02 B
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: model size = 6.86 GiB (4.53 BPW)
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: general.name = LLaMA v2
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: BOS token = 1 '<s>'
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: EOS token = 2 '</s>'
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: UNK token = 0 '<unk>'
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: PAD token = 0 '<unk>'
Apr 18 19:49:56 quorra ollama[1170]: llm_load_print_meta: LF token = 13 '<0x0A>'
Apr 18 19:49:56 quorra ollama[1170]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
Apr 18 19:49:56 quorra ollama[1170]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
Apr 18 19:49:56 quorra ollama[1170]: ggml_cuda_init: found 1 CUDA devices:
Apr 18 19:49:56 quorra ollama[1170]: Device 0: NVIDIA GeForce RTX 4070 Ti SUPER, compute capability 8.9, VMM: yes
Apr 18 19:49:56 quorra ollama[1170]: llm_load_tensors: ggml ctx size = 0.28 MiB
Apr 18 19:49:56 quorra ollama[1170]: llm_load_tensors: offloading 40 repeating layers to GPU
Apr 18 19:49:56 quorra ollama[1170]: llm_load_tensors: offloading non-repeating layers to GPU
Apr 18 19:49:56 quorra ollama[1170]: llm_load_tensors: offloaded 41/41 layers to GPU
Apr 18 19:49:56 quorra ollama[1170]: llm_load_tensors: CPU buffer size = 87.90 MiB
Apr 18 19:49:56 quorra ollama[1170]: llm_load_tensors: CUDA0 buffer size = 6936.02 MiB
Apr 18 19:49:56 quorra ollama[1170]: ...................................................................................................
Apr 18 19:49:56 quorra ollama[1170]: llama_new_context_with_model: n_ctx = 4096
Apr 18 19:49:56 quorra ollama[1170]: llama_new_context_with_model: n_batch = 512
Apr 18 19:49:56 quorra ollama[1170]: llama_new_context_with_model: n_ubatch = 512
Apr 18 19:49:56 quorra ollama[1170]: llama_new_context_with_model: freq_base = 10000.0
Apr 18 19:49:56 quorra ollama[1170]: llama_new_context_with_model: freq_scale = 1
Apr 18 19:49:56 quorra ollama[1170]: llama_kv_cache_init: CUDA0 KV buffer size = 3200.00 MiB
Apr 18 19:49:56 quorra ollama[1170]: llama_new_context_with_model: KV self size = 3200.00 MiB, K (f16): 1600.00 MiB, V (f16): 1600.00 MiB
Apr 18 19:49:57 quorra ollama[1170]: llama_new_context_with_model: CUDA_Host output buffer size = 0.14 MiB
Apr 18 19:49:57 quorra ollama[1170]: llama_new_context_with_model: CUDA0 compute buffer size = 368.00 MiB
Apr 18 19:49:57 quorra ollama[1170]: llama_new_context_with_model: CUDA_Host compute buffer size = 18.01 MiB
Apr 18 19:49:57 quorra ollama[1170]: llama_new_context_with_model: graph nodes = 1286
Apr 18 19:49:57 quorra ollama[1170]: llama_new_context_with_model: graph splits = 2
Apr 18 19:49:57 quorra ollama[583304]: {"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"140087397396480","timestamp":1713469797}
Apr 18 19:49:57 quorra ollama[583304]: {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":4096,"slot_id":0,"tid":"140087397396480","timestamp":1713469797}
Apr 18 19:49:57 quorra ollama[583304]: {"function":"main","level":"INFO","line":3064,"msg":"model loaded","tid":"140087397396480","timestamp":1713469797}
Apr 18 19:49:57 quorra ollama[583304]: {"function":"main","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"31","port":"37295","tid":"140087397396480","timestamp":1713469797}
```
I wonder if this is related. Here is an error I get when I attempt to load DBRX:
```
Apr 18 18:57:54 quorra ollama[1170]: time=2024-04-18T18:57:54.713Z level=INFO source=routes.go:97 msg="changing loaded model"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.028Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.028Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.028Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama1615772994/runners/cuda_v11/libcudart.so.11.0]"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.039Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.039Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.128Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.141Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.141Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.142Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama1615772994/runners/cuda_v11/libcudart.so.11.0]"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.142Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.142Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.174Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.186Z level=INFO source=server.go:127 msg="offload to gpu" reallayers=8 layers=8 required="71518.7 MiB" used="14828.9 MiB" available="15857.2 MiB" kv="320.0 MiB" fulloffload="320.0 MiB" partialoffload="320.0 MiB"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.186Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.187Z level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama1615772994/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-1d12441f19436dbb0bcc4067e9d47921b944ef4a87b35873aa430e85e91a93c8 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 8 --port 41305"
Apr 18 18:57:55 quorra ollama[1170]: time=2024-04-18T18:57:55.187Z level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
Apr 18 18:57:55 quorra ollama[20191]: {"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"140044021682176","timestamp":1713466675}
Apr 18 18:57:55 quorra ollama[20191]: {"build":1,"commit":"7593639","function":"main","level":"INFO","line":2819,"msg":"build info","tid":"140044021682176","timestamp":1713466675}
Apr 18 18:57:55 quorra ollama[20191]: {"function":"main","level":"INFO","line":2822,"msg":"system info","n_threads":16,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"140044021682176","timestamp":1713466675,"total_threads":32}
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: loaded meta data with 24 key-value pairs and 323 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-1d12441f19436dbb0bcc4067e9d47921b944ef4a87b35873aa430e85e91a93c8 (version GGUF V3 (latest))
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 0: general.architecture str = dbrx
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 1: general.name str = dbrx
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 2: dbrx.block_count u32 = 40
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 3: dbrx.context_length u32 = 32768
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 4: dbrx.embedding_length u32 = 6144
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 5: dbrx.feed_forward_length u32 = 10752
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 6: dbrx.attention.head_count u32 = 48
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 7: dbrx.attention.head_count_kv u32 = 8
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 8: dbrx.rope.freq_base f32 = 500000.000000
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 9: dbrx.attention.clamp_kqv f32 = 8.000000
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 10: general.file_type u32 = 2
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 11: dbrx.expert_count u32 = 16
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 12: dbrx.expert_used_count u32 = 4
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 13: dbrx.attention.layer_norm_epsilon f32 = 0.000010
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,100352] = ["!", "\"", "#", "$", "%", "&", "'", ...
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,100352] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,100000] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 100257
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 100257
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 100257
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 100277
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 22: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - kv 23: general.quantization_version u32 = 2
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - type f32: 81 tensors
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - type f16: 40 tensors
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - type q4_0: 201 tensors
Apr 18 18:57:55 quorra ollama[1170]: llama_model_loader: - type q6_K: 1 tensors
Apr 18 18:57:55 quorra ollama[1170]: llm_load_vocab: special tokens definition check successful ( 96/100352 ).
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: format = GGUF V3 (latest)
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: arch = dbrx
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: vocab type = BPE
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_vocab = 100352
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_merges = 100000
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_ctx_train = 32768
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_embd = 6144
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_head = 48
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_head_kv = 8
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_layer = 40
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_rot = 128
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_embd_head_k = 128
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_embd_head_v = 128
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_gqa = 6
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_embd_k_gqa = 1024
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_embd_v_gqa = 1024
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: f_norm_eps = 1.0e-05
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: f_norm_rms_eps = 0.0e+00
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: f_clamp_kqv = 8.0e+00
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: f_logit_scale = 0.0e+00
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_ff = 10752
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_expert = 16
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_expert_used = 4
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: causal attn = 1
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: pooling type = 0
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: rope type = 2
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: rope scaling = linear
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: freq_base_train = 500000.0
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: freq_scale_train = 1
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: n_yarn_orig_ctx = 32768
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: rope_finetuned = unknown
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: ssm_d_conv = 0
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: ssm_d_inner = 0
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: ssm_d_state = 0
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: ssm_dt_rank = 0
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: model type = 16x12B
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: model ftype = Q4_0
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: model params = 131.60 B
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: model size = 69.09 GiB (4.51 BPW)
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: general.name = dbrx
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: BOS token = 100257 '<|endoftext|>'
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: EOS token = 100257 '<|endoftext|>'
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: UNK token = 100257 '<|endoftext|>'
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: PAD token = 100277 '<|pad|>'
Apr 18 18:57:55 quorra ollama[1170]: llm_load_print_meta: LF token = 128 'Ä'
Apr 18 18:57:55 quorra ollama[1170]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
Apr 18 18:57:55 quorra ollama[1170]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
Apr 18 18:57:55 quorra ollama[1170]: ggml_cuda_init: found 1 CUDA devices:
Apr 18 18:57:55 quorra ollama[1170]: Device 0: NVIDIA GeForce RTX 4070 Ti SUPER, compute capability 8.9, VMM: yes
Apr 18 18:57:55 quorra ollama[1170]: llm_load_tensors: ggml ctx size = 0.74 MiB
Apr 18 18:58:18 quorra ollama[1170]: llm_load_tensors: offloading 8 repeating layers to GPU
Apr 18 18:58:18 quorra ollama[1170]: llm_load_tensors: offloaded 8/41 layers to GPU
Apr 18 18:58:18 quorra ollama[1170]: llm_load_tensors: CPU buffer size = 70752.49 MiB
Apr 18 18:58:18 quorra ollama[1170]: llm_load_tensors: CUDA0 buffer size = 13987.88 MiB
Apr 18 18:58:19 quorra ollama[1170]: ....................................................................................................
Apr 18 18:58:19 quorra ollama[1170]: llama_new_context_with_model: n_ctx = 2048
Apr 18 18:58:19 quorra ollama[1170]: llama_new_context_with_model: n_batch = 512
Apr 18 18:58:19 quorra ollama[1170]: llama_new_context_with_model: n_ubatch = 512
Apr 18 18:58:19 quorra ollama[1170]: llama_new_context_with_model: freq_base = 500000.0
Apr 18 18:58:19 quorra ollama[1170]: llama_new_context_with_model: freq_scale = 1
Apr 18 18:58:19 quorra ollama[1170]: llama_kv_cache_init: CUDA_Host KV buffer size = 256.00 MiB
Apr 18 18:58:19 quorra ollama[1170]: llama_kv_cache_init: CUDA0 KV buffer size = 64.00 MiB
Apr 18 18:58:19 quorra ollama[1170]: llama_new_context_with_model: KV self size = 320.00 MiB, K (f16): 160.00 MiB, V (f16): 160.00 MiB
Apr 18 18:58:19 quorra ollama[1170]: llama_new_context_with_model: CUDA_Host output buffer size = 0.41 MiB
Apr 18 18:58:19 quorra ollama[1170]: llama_new_context_with_model: CUDA0 compute buffer size = 1794.00 MiB
Apr 18 18:58:19 quorra ollama[1170]: llama_new_context_with_model: CUDA_Host compute buffer size = 16.01 MiB
Apr 18 18:58:19 quorra ollama[1170]: llama_new_context_with_model: graph nodes = 2886
Apr 18 18:58:19 quorra ollama[1170]: llama_new_context_with_model: graph splits = 325
Apr 18 18:58:20 quorra ollama[1170]: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED
Apr 18 18:58:20 quorra ollama[1170]: current device: 0, in function cublas_handle at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda/common.cuh:526
Apr 18 18:58:20 quorra ollama[1170]: cublasCreate_v2(&cublas_handles[device])
Apr 18 18:58:20 quorra ollama[1170]: GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !"CUDA error"
Apr 18 18:58:20 quorra ollama[1170]: time=2024-04-18T18:58:20.740Z level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: -1 CUDA error: CUBLAS_STATUS_NOT_INITIALIZED\n current device: 0, in function cublas_handle at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda/common.cuh:526\n cublasCreate_v2(&cublas_handles[device])\nGGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !\"CUDA error\""
Apr 18 18:58:20 quorra ollama[1170]: [GIN] 2024/04/18 - 18:58:20 | 500 | 26.029470037s | 10.0.0.123 | POST "/api/generate"
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3736/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3974
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3974/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3974/comments
|
https://api.github.com/repos/ollama/ollama/issues/3974/events
|
https://github.com/ollama/ollama/issues/3974
| 2,266,876,797
|
I_kwDOJ0Z1Ps6HHct9
| 3,974
|
error loading model architecture: unknown model architecture: 'phi3'
|
{
"login": "sanyuan0704",
"id": 39261479,
"node_id": "MDQ6VXNlcjM5MjYxNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/39261479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanyuan0704",
"html_url": "https://github.com/sanyuan0704",
"followers_url": "https://api.github.com/users/sanyuan0704/followers",
"following_url": "https://api.github.com/users/sanyuan0704/following{/other_user}",
"gists_url": "https://api.github.com/users/sanyuan0704/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanyuan0704/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanyuan0704/subscriptions",
"organizations_url": "https://api.github.com/users/sanyuan0704/orgs",
"repos_url": "https://api.github.com/users/sanyuan0704/repos",
"events_url": "https://api.github.com/users/sanyuan0704/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanyuan0704/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 12
| 2024-04-27T06:53:31
| 2024-06-04T06:46:21
| 2024-06-04T06:46:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I fine-tuned phi3 and quantized it with the latest llama.cpp, I found that Ollama cannot load the model:

### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3974/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3974/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6449
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6449/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6449/comments
|
https://api.github.com/repos/ollama/ollama/issues/6449/events
|
https://github.com/ollama/ollama/issues/6449
| 2,476,410,072
|
I_kwDOJ0Z1Ps6TmwTY
| 6,449
|
Microsoft Phi-3.5 models
|
{
"login": "animaldomestico",
"id": 175445186,
"node_id": "U_kgDOCnUUwg",
"avatar_url": "https://avatars.githubusercontent.com/u/175445186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/animaldomestico",
"html_url": "https://github.com/animaldomestico",
"followers_url": "https://api.github.com/users/animaldomestico/followers",
"following_url": "https://api.github.com/users/animaldomestico/following{/other_user}",
"gists_url": "https://api.github.com/users/animaldomestico/gists{/gist_id}",
"starred_url": "https://api.github.com/users/animaldomestico/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/animaldomestico/subscriptions",
"organizations_url": "https://api.github.com/users/animaldomestico/orgs",
"repos_url": "https://api.github.com/users/animaldomestico/repos",
"events_url": "https://api.github.com/users/animaldomestico/events{/privacy}",
"received_events_url": "https://api.github.com/users/animaldomestico/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 11
| 2024-08-20T19:40:46
| 2024-11-15T13:08:09
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
- [Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)
- [Phi-3.5-MoE-instruct](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct)
- [Phi-3.5-vision-instruct](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6449/reactions",
"total_count": 80,
"+1": 62,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 17
}
|
https://api.github.com/repos/ollama/ollama/issues/6449/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8401
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8401/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8401/comments
|
https://api.github.com/repos/ollama/ollama/issues/8401/events
|
https://github.com/ollama/ollama/issues/8401
| 2,783,878,264
|
I_kwDOJ0Z1Ps6l7px4
| 8,401
|
Failed to summarize the long context
|
{
"login": "SDAIer",
"id": 174102361,
"node_id": "U_kgDOCmCXWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SDAIer",
"html_url": "https://github.com/SDAIer",
"followers_url": "https://api.github.com/users/SDAIer/followers",
"following_url": "https://api.github.com/users/SDAIer/following{/other_user}",
"gists_url": "https://api.github.com/users/SDAIer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SDAIer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SDAIer/subscriptions",
"organizations_url": "https://api.github.com/users/SDAIer/orgs",
"repos_url": "https://api.github.com/users/SDAIer/repos",
"events_url": "https://api.github.com/users/SDAIer/events{/privacy}",
"received_events_url": "https://api.github.com/users/SDAIer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2025-01-13T12:50:21
| 2025-01-18T02:32:02
| 2025-01-18T02:32:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have 4 × A30 GPUs (24 GB × 4) and a piece of content with a 111k-token context. I used 3 models that support 128k context:
llama3.2:latest
llama3.1:8b
glm4:9b
The models were set with the parameter num_ctx=121k. In testing, none of the models could successfully summarize the content (if the context is sufficiently small, all three models succeed).
Moreover, monitoring GPU usage with gpustat -i showed that only one model can utilize multiple GPUs, while the other two use only a single GPU.
The Ollama logs show that almost every model repeatedly reloads the context, which takes a long time but ultimately fails, resulting in a poor user experience.
Could you please help analyze the logs to figure out why it always fails?
The logs for the three models are attached below.
[glm4.log](https://github.com/user-attachments/files/18396940/glm4.log)
[Llama 3.1 8B Instruct.log](https://github.com/user-attachments/files/18396944/Llama.3.1.8B.Instruct.log)
[Llama 3.2 3B Instruct.log](https://github.com/user-attachments/files/18396945/Llama.3.2.3B.Instruct.log)
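For reference, the num_ctx override described above is typically applied via a Modelfile; a minimal sketch, assuming the stock llama3.1:8b base and an illustrative token count:

```
FROM llama3.1:8b
PARAMETER num_ctx 121000
```

The same option can also be passed per request through the API's `options` field.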
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.3.11
|
{
"login": "SDAIer",
"id": 174102361,
"node_id": "U_kgDOCmCXWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SDAIer",
"html_url": "https://github.com/SDAIer",
"followers_url": "https://api.github.com/users/SDAIer/followers",
"following_url": "https://api.github.com/users/SDAIer/following{/other_user}",
"gists_url": "https://api.github.com/users/SDAIer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SDAIer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SDAIer/subscriptions",
"organizations_url": "https://api.github.com/users/SDAIer/orgs",
"repos_url": "https://api.github.com/users/SDAIer/repos",
"events_url": "https://api.github.com/users/SDAIer/events{/privacy}",
"received_events_url": "https://api.github.com/users/SDAIer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8401/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8364
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8364/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8364/comments
|
https://api.github.com/repos/ollama/ollama/issues/8364/events
|
https://github.com/ollama/ollama/issues/8364
| 2,777,729,863
|
I_kwDOJ0Z1Ps6lkMtH
| 8,364
|
Add tools support to dolphin3
|
{
"login": "larria",
"id": 1115524,
"node_id": "MDQ6VXNlcjExMTU1MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1115524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/larria",
"html_url": "https://github.com/larria",
"followers_url": "https://api.github.com/users/larria/followers",
"following_url": "https://api.github.com/users/larria/following{/other_user}",
"gists_url": "https://api.github.com/users/larria/gists{/gist_id}",
"starred_url": "https://api.github.com/users/larria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/larria/subscriptions",
"organizations_url": "https://api.github.com/users/larria/orgs",
"repos_url": "https://api.github.com/users/larria/repos",
"events_url": "https://api.github.com/users/larria/events{/privacy}",
"received_events_url": "https://api.github.com/users/larria/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2025-01-09T12:40:53
| 2025-01-10T23:59:43
| 2025-01-10T23:59:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The Dolphin3 model's creator confirmed on Hugging Face that the model does support tools. Maybe Ollama's built-in template needs an update:
https://huggingface.co/cognitivecomputations/Dolphin3.0-Llama3.1-8B/discussions/2
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8364/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1003
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1003/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1003/comments
|
https://api.github.com/repos/ollama/ollama/issues/1003/events
|
https://github.com/ollama/ollama/issues/1003
| 1,977,467,304
|
I_kwDOJ0Z1Ps513cGo
| 1,003
|
Request: Support Modelfile management API
|
{
"login": "LushVoid",
"id": 149633625,
"node_id": "U_kgDOCOs6WQ",
"avatar_url": "https://avatars.githubusercontent.com/u/149633625?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LushVoid",
"html_url": "https://github.com/LushVoid",
"followers_url": "https://api.github.com/users/LushVoid/followers",
"following_url": "https://api.github.com/users/LushVoid/following{/other_user}",
"gists_url": "https://api.github.com/users/LushVoid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LushVoid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LushVoid/subscriptions",
"organizations_url": "https://api.github.com/users/LushVoid/orgs",
"repos_url": "https://api.github.com/users/LushVoid/repos",
"events_url": "https://api.github.com/users/LushVoid/events{/privacy}",
"received_events_url": "https://api.github.com/users/LushVoid/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-11-04T18:56:23
| 2023-12-22T03:57:26
| 2023-12-22T03:57:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Description
As a developer integrating with the Ollama server, I find the current workflow for creating and managing models requires handling Modelfiles locally before uploading. This process can be cumbersome.
### Feature Request
I would like to request an enhancement to the API that would allow direct editing of Modelfiles through API requests. This would enable more dynamic and automated workflows, particularly for applications deployed in browsers.
### Suggested Solution
Implement a new API endpoint (e.g., `PUT /api/models/:name`) that allows users to send a Modelfile's content, or updates to an existing model's Modelfile, directly in the request body. The server could then apply these changes to the model without requiring the webapp user to manage and upload files manually and/or copy a new file entirely.
**Dynamic Model Management:** This feature would simplify the process of programmatically adjusting models based on application needs or user feedback without the overhead of file management.
Something like:
`curl -X PUT http://localhost:11434/api/models/rMario -d '{ "modelfile": "<updated_modelfile_content_here>" }'`
### Conclusion
Providing a means to edit Modelfiles directly via the API would greatly enhance the developer experience and broaden the use cases for the Ollama server. It would also bring the tool in line with contemporary cloud-native practices.
Thank you for considering this request.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1003/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4100
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4100/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4100/comments
|
https://api.github.com/repos/ollama/ollama/issues/4100/events
|
https://github.com/ollama/ollama/issues/4100
| 2,275,512,233
|
I_kwDOJ0Z1Ps6HoY-p
| 4,100
|
Error: do encode request: Post "http://127.0.0.1:39207/tokenize": EOF
|
{
"login": "j2l",
"id": 65325,
"node_id": "MDQ6VXNlcjY1MzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/65325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/j2l",
"html_url": "https://github.com/j2l",
"followers_url": "https://api.github.com/users/j2l/followers",
"following_url": "https://api.github.com/users/j2l/following{/other_user}",
"gists_url": "https://api.github.com/users/j2l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/j2l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/j2l/subscriptions",
"organizations_url": "https://api.github.com/users/j2l/orgs",
"repos_url": "https://api.github.com/users/j2l/repos",
"events_url": "https://api.github.com/users/j2l/events{/privacy}",
"received_events_url": "https://api.github.com/users/j2l/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2024-05-02T13:13:01
| 2024-05-18T17:21:19
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello,
I downloaded the Q4KM model from https://huggingface.co/LiteLLMs/French-Alpaca-Llama3-8B-Instruct-v1.0-GGUF/tree/main/Q4_K_M
renamed locally to French-Alpaca-Llama3-8B-Instruct-v1.gguf
Modelfile:
`FROM "./French-Alpaca-Llama3-8B-Instruct-v1.gguf"`
ollama create frll3 -f ./Modelfile
```
transferring model data
creating model layer
using already created layer sha256:08941f7a82566ca0116881e211330eae5838c20146132ed8fb9de46b6f5ea54b
writing layer sha256:9f194159c3b80adee4448e1a1d380df743363881d417b5e9841e9611f884c155
writing manifest
success
```
ollama run frll3
`>>> hi`
`Error: do encode request: Post "http://127.0.0.1:39207/tokenize": EOF`
But `ollama run llama3` works fine; I can chat with it with no errors. I have a 3060 (12 GB VRAM).
Is it because of the model? The renaming? The Modelfile?
Thanks
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4100/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5873
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5873/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5873/comments
|
https://api.github.com/repos/ollama/ollama/issues/5873/events
|
https://github.com/ollama/ollama/pull/5873
| 2,425,043,150
|
PR_kwDOJ0Z1Ps52NQsp
| 5,873
|
Update faq.md
|
{
"login": "Thinkpiet",
"id": 44381886,
"node_id": "MDQ6VXNlcjQ0MzgxODg2",
"avatar_url": "https://avatars.githubusercontent.com/u/44381886?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Thinkpiet",
"html_url": "https://github.com/Thinkpiet",
"followers_url": "https://api.github.com/users/Thinkpiet/followers",
"following_url": "https://api.github.com/users/Thinkpiet/following{/other_user}",
"gists_url": "https://api.github.com/users/Thinkpiet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Thinkpiet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Thinkpiet/subscriptions",
"organizations_url": "https://api.github.com/users/Thinkpiet/orgs",
"repos_url": "https://api.github.com/users/Thinkpiet/repos",
"events_url": "https://api.github.com/users/Thinkpiet/events{/privacy}",
"received_events_url": "https://api.github.com/users/Thinkpiet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-23T12:06:35
| 2024-08-14T16:41:28
| 2024-08-14T16:41:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5873",
"html_url": "https://github.com/ollama/ollama/pull/5873",
"diff_url": "https://github.com/ollama/ollama/pull/5873.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5873.patch",
"merged_at": null
}
|
I tried to find the model storage path in the FAQ, but it was missing: on two of my Arch Linux-based machines it is "/var/lib/ollama/.ollama/models", not "/usr/share/ollama/.ollama/models". I don't know about other distributions.
Added this model storage path to the FAQ.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5873/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1167
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1167/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1167/comments
|
https://api.github.com/repos/ollama/ollama/issues/1167/events
|
https://github.com/ollama/ollama/issues/1167
| 1,998,377,298
|
I_kwDOJ0Z1Ps53HNFS
| 1,167
|
Another CUDA error 100 problem on WSL2 with RTX3090
|
{
"login": "samxu29",
"id": 22229980,
"node_id": "MDQ6VXNlcjIyMjI5OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/22229980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samxu29",
"html_url": "https://github.com/samxu29",
"followers_url": "https://api.github.com/users/samxu29/followers",
"following_url": "https://api.github.com/users/samxu29/following{/other_user}",
"gists_url": "https://api.github.com/users/samxu29/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samxu29/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samxu29/subscriptions",
"organizations_url": "https://api.github.com/users/samxu29/orgs",
"repos_url": "https://api.github.com/users/samxu29/repos",
"events_url": "https://api.github.com/users/samxu29/events{/privacy}",
"received_events_url": "https://api.github.com/users/samxu29/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 16
| 2023-11-17T06:36:11
| 2024-05-21T17:54:18
| 2024-05-21T17:54:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
First, I want to thank all the developers: this is an amazing project. Being a noob, I am running into a problem that I hope someone can answer.
I have a strange problem similar to this issue: https://github.com/jmorganca/ollama/issues/684
At first I thought it was a CUDA toolkit problem, so I uninstalled Ollama, freshly installed the CUDA toolkit, then reran the script `curl https://ollama.ai/install.sh | sh` to install Ollama again.
But I still get `CUDA error 100 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:5661: no CUDA-capable device is detected current device: 32544`
I had no problem running llama-cpp-python with `LLAMA_CUBLAS` support in another project I was working on, but for the life of me I can't get Ollama running on the GPU here:
```
2023/11/17 01:27:31 llama.go:290: 23013 MB VRAM available, loading up to 150 GPU layers
2023/11/17 01:27:31 llama.go:415: starting llama runner
2023/11/17 01:27:31 llama.go:473: waiting for llama runner to start responding
CUDA error 100 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:5661: no CUDA-capable device is detected current device: 32544
2023/11/17 01:27:31 llama.go:430: 100 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:5661: no CUDA-capable device is detected
current device: 32544
2023/11/17 01:27:31 llama.go:438: error starting llama runner: llama runner process has terminated
2023/11/17 01:27:31 llama.go:504: llama runner stopped successfully
2023/11/17 01:27:31 llama.go:415: starting llama runner
2023/11/17 01:27:31 llama.go:473: waiting for llama runner to start responding
{"timestamp":1700202451,"level":"WARNING","function":"server_params_parse","line":871,"message":"Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_layers":-1}
{"timestamp":1700202451,"level":"INFO","function":"main","line":1323,"message":"build info","build":219,"commit":"9e70cc0"}
{"timestamp":1700202451,"level":"INFO","function":"main","line":1325,"message":"system info","n_threads":10,"n_threads_batch":-1,"total_threads":20,"system_info":"AVX = 1 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from
```
How would I change `num_gpu`? And are there any flags I need to turn on `CUBLAS` for Ollama to utilize the GPU?
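On the `num_gpu` question: in Ollama this is a model parameter rather than a build flag, and can be set in a Modelfile; a minimal sketch, with the base model and layer count purely illustrative:

```
FROM llama2
PARAMETER num_gpu 33
```

It can also be changed interactively in `ollama run` via `/set parameter num_gpu 33`.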
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1167/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7808
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7808/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7808/comments
|
https://api.github.com/repos/ollama/ollama/issues/7808/events
|
https://github.com/ollama/ollama/issues/7808
| 2,685,389,315
|
I_kwDOJ0Z1Ps6gD8oD
| 7,808
|
AIDC-AI/Marco-o1
|
{
"login": "nonetrix",
"id": 45698918,
"node_id": "MDQ6VXNlcjQ1Njk4OTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/45698918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nonetrix",
"html_url": "https://github.com/nonetrix",
"followers_url": "https://api.github.com/users/nonetrix/followers",
"following_url": "https://api.github.com/users/nonetrix/following{/other_user}",
"gists_url": "https://api.github.com/users/nonetrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nonetrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nonetrix/subscriptions",
"organizations_url": "https://api.github.com/users/nonetrix/orgs",
"repos_url": "https://api.github.com/users/nonetrix/repos",
"events_url": "https://api.github.com/users/nonetrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/nonetrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-23T05:23:23
| 2024-11-23T11:34:49
| 2024-11-23T11:34:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
AIDC-AI/Marco-o1 is a clone of o1-style reasoning at 7B size; it's based on Qwen, so it should be easy to add.
|
{
"login": "nonetrix",
"id": 45698918,
"node_id": "MDQ6VXNlcjQ1Njk4OTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/45698918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nonetrix",
"html_url": "https://github.com/nonetrix",
"followers_url": "https://api.github.com/users/nonetrix/followers",
"following_url": "https://api.github.com/users/nonetrix/following{/other_user}",
"gists_url": "https://api.github.com/users/nonetrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nonetrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nonetrix/subscriptions",
"organizations_url": "https://api.github.com/users/nonetrix/orgs",
"repos_url": "https://api.github.com/users/nonetrix/repos",
"events_url": "https://api.github.com/users/nonetrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/nonetrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7808/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4890
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4890/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4890/comments
|
https://api.github.com/repos/ollama/ollama/issues/4890/events
|
https://github.com/ollama/ollama/issues/4890
| 2,339,449,436
|
I_kwDOJ0Z1Ps6LcSpc
| 4,890
|
qwen2 not run correctly
|
{
"login": "FreemanFeng",
"id": 1662126,
"node_id": "MDQ6VXNlcjE2NjIxMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1662126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FreemanFeng",
"html_url": "https://github.com/FreemanFeng",
"followers_url": "https://api.github.com/users/FreemanFeng/followers",
"following_url": "https://api.github.com/users/FreemanFeng/following{/other_user}",
"gists_url": "https://api.github.com/users/FreemanFeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FreemanFeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FreemanFeng/subscriptions",
"organizations_url": "https://api.github.com/users/FreemanFeng/orgs",
"repos_url": "https://api.github.com/users/FreemanFeng/repos",
"events_url": "https://api.github.com/users/FreemanFeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/FreemanFeng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 23
| 2024-06-07T02:14:28
| 2024-06-07T22:57:58
| 2024-06-07T22:57:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
ollama run qwen2
>>> tell a story
GG
OnceGGG thereGGGGGGGGGGGG wasGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
>>> 讲一个故事 (tell a story)
GG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
>>>
Above is the qwen2 model's response. I'm not sure whether Ollama supports qwen2 yet or not.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.41
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4890/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/4890/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7238
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7238/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7238/comments
|
https://api.github.com/repos/ollama/ollama/issues/7238/events
|
https://github.com/ollama/ollama/issues/7238
| 2,594,382,133
|
I_kwDOJ0Z1Ps6aoyE1
| 7,238
|
Ollama document intelligence engine
|
{
"login": "dcasota",
"id": 14890243,
"node_id": "MDQ6VXNlcjE0ODkwMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/14890243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcasota",
"html_url": "https://github.com/dcasota",
"followers_url": "https://api.github.com/users/dcasota/followers",
"following_url": "https://api.github.com/users/dcasota/following{/other_user}",
"gists_url": "https://api.github.com/users/dcasota/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcasota/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcasota/subscriptions",
"organizations_url": "https://api.github.com/users/dcasota/orgs",
"repos_url": "https://api.github.com/users/dcasota/repos",
"events_url": "https://api.github.com/users/dcasota/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcasota/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-10-17T11:03:03
| 2024-10-24T18:35:05
| 2024-10-24T18:06:56
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Does anyone have Ollama examples for document intelligence?
For comparison, Azure AI ships an engine that extracts elements from documents and passes the data into customizable workflows. However, there are culprits even in the paid tier.
Azure AI Document Intelligence requirements:
- Microsoft Office files, or
- JPEG, PNG, BMP, TIFF, or PDF format
- PDF documents must have dimensions less than 17 x 17 inches or A3 paper size.
- PDF documents must not be password-protected.
- Images must have dimensions between 50 x 50 pixels and 10,000 x 10,000 pixels.
- file size less than 500 MB (paid standard tier)
- only the first 2000 pages are analyzed (paid standard tier)
Some information about [supported languages](https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/language-support/ocr?view=doc-intel-4.0.0&preserve-view=true&tabs=read-print%2Clayout-print%2Cgeneral)
Which Ollama examples fit this extraction-plus-external-workflow demand?
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7238/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2613
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2613/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2613/comments
|
https://api.github.com/repos/ollama/ollama/issues/2613/events
|
https://github.com/ollama/ollama/issues/2613
| 2,144,239,725
|
I_kwDOJ0Z1Ps5_zoBt
| 2,613
|
Slow download speed on windows
|
{
"login": "bcllcc",
"id": 73450286,
"node_id": "MDQ6VXNlcjczNDUwMjg2",
"avatar_url": "https://avatars.githubusercontent.com/u/73450286?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bcllcc",
"html_url": "https://github.com/bcllcc",
"followers_url": "https://api.github.com/users/bcllcc/followers",
"following_url": "https://api.github.com/users/bcllcc/following{/other_user}",
"gists_url": "https://api.github.com/users/bcllcc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bcllcc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bcllcc/subscriptions",
"organizations_url": "https://api.github.com/users/bcllcc/orgs",
"repos_url": "https://api.github.com/users/bcllcc/repos",
"events_url": "https://api.github.com/users/bcllcc/events{/privacy}",
"received_events_url": "https://api.github.com/users/bcllcc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 2
| 2024-02-20T12:14:07
| 2024-03-11T21:04:30
| 2024-03-11T21:04:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

|
{
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyeva/followers",
"following_url": "https://api.github.com/users/hoyyeva/following{/other_user}",
"gists_url": "https://api.github.com/users/hoyyeva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoyyeva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoyyeva/subscriptions",
"organizations_url": "https://api.github.com/users/hoyyeva/orgs",
"repos_url": "https://api.github.com/users/hoyyeva/repos",
"events_url": "https://api.github.com/users/hoyyeva/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoyyeva/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2613/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2515
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2515/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2515/comments
|
https://api.github.com/repos/ollama/ollama/issues/2515/events
|
https://github.com/ollama/ollama/issues/2515
| 2,136,970,821
|
I_kwDOJ0Z1Ps5_X5ZF
| 2,515
|
How to run a Pytorch model with ollama?
|
{
"login": "PriyaranjanMarathe",
"id": 120328993,
"node_id": "U_kgDOBywTIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/120328993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PriyaranjanMarathe",
"html_url": "https://github.com/PriyaranjanMarathe",
"followers_url": "https://api.github.com/users/PriyaranjanMarathe/followers",
"following_url": "https://api.github.com/users/PriyaranjanMarathe/following{/other_user}",
"gists_url": "https://api.github.com/users/PriyaranjanMarathe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PriyaranjanMarathe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PriyaranjanMarathe/subscriptions",
"organizations_url": "https://api.github.com/users/PriyaranjanMarathe/orgs",
"repos_url": "https://api.github.com/users/PriyaranjanMarathe/repos",
"events_url": "https://api.github.com/users/PriyaranjanMarathe/events{/privacy}",
"received_events_url": "https://api.github.com/users/PriyaranjanMarathe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2024-02-15T16:34:21
| 2024-04-09T00:16:08
| 2024-02-18T06:47:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Does Ollama support loading a PyTorch model? I have trained a model and its output is a `.pt` file. How do I use it with Ollama? I tried the following and it doesn't seem to work:
```
[root@ trained_models]# ollama run model.pt
pulling manifest
Error: pull model manifest: file does not exist
```
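For context, a minimal sketch of the usual route: Ollama consumes GGUF weights referenced from a Modelfile rather than raw `.pt` checkpoints, so the model has to be converted first (typically via llama.cpp's conversion script). The paths, model name, and script name below are hypothetical and may differ by llama.cpp version:

```shell
# Convert the trained Hugging Face-format checkpoint directory to GGUF
# (hypothetical paths; script name follows recent llama.cpp checkouts)
python llama.cpp/convert_hf_to_gguf.py ./trained_model_dir --outfile model.gguf

# Minimal Modelfile pointing Ollama at the converted weights
cat > Modelfile <<'EOF'
FROM ./model.gguf
EOF

# Register and run the model under a local name
ollama create my-model -f Modelfile
ollama run my-model
```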
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2515/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6267
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6267/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6267/comments
|
https://api.github.com/repos/ollama/ollama/issues/6267/events
|
https://github.com/ollama/ollama/issues/6267
| 2,456,821,282
|
I_kwDOJ0Z1Ps6ScB4i
| 6,267
|
add openbmb MiniCPM-V-2_6
|
{
"login": "insinfo",
"id": 12227024,
"node_id": "MDQ6VXNlcjEyMjI3MDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/12227024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/insinfo",
"html_url": "https://github.com/insinfo",
"followers_url": "https://api.github.com/users/insinfo/followers",
"following_url": "https://api.github.com/users/insinfo/following{/other_user}",
"gists_url": "https://api.github.com/users/insinfo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/insinfo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/insinfo/subscriptions",
"organizations_url": "https://api.github.com/users/insinfo/orgs",
"repos_url": "https://api.github.com/users/insinfo/repos",
"events_url": "https://api.github.com/users/insinfo/events{/privacy}",
"received_events_url": "https://api.github.com/users/insinfo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 20
| 2024-08-08T23:14:34
| 2024-09-14T00:43:50
| 2024-09-01T23:46:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
add openbmb MiniCPM-V-2_6
https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf/tree/main
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6267/reactions",
"total_count": 9,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
}
|
https://api.github.com/repos/ollama/ollama/issues/6267/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5288
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5288/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5288/comments
|
https://api.github.com/repos/ollama/ollama/issues/5288/events
|
https://github.com/ollama/ollama/issues/5288
| 2,374,021,894
|
I_kwDOJ0Z1Ps6NgLMG
| 5,288
|
ollama cannot running
|
{
"login": "liliang-cn",
"id": 20553741,
"node_id": "MDQ6VXNlcjIwNTUzNzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/20553741?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liliang-cn",
"html_url": "https://github.com/liliang-cn",
"followers_url": "https://api.github.com/users/liliang-cn/followers",
"following_url": "https://api.github.com/users/liliang-cn/following{/other_user}",
"gists_url": "https://api.github.com/users/liliang-cn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liliang-cn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liliang-cn/subscriptions",
"organizations_url": "https://api.github.com/users/liliang-cn/orgs",
"repos_url": "https://api.github.com/users/liliang-cn/repos",
"events_url": "https://api.github.com/users/liliang-cn/events{/privacy}",
"received_events_url": "https://api.github.com/users/liliang-cn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-06-26T02:10:49
| 2024-06-26T16:24:39
| 2024-06-26T16:24:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
ollama list
Error: something went wrong, please see the ollama server logs for details
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.46
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5288/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2333
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2333/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2333/comments
|
https://api.github.com/repos/ollama/ollama/issues/2333/events
|
https://github.com/ollama/ollama/issues/2333
| 2,115,794,326
|
I_kwDOJ0Z1Ps5-HHWW
| 2,333
|
Vulkan Build
|
{
"login": "MichaelFomenko",
"id": 12229584,
"node_id": "MDQ6VXNlcjEyMjI5NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/12229584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichaelFomenko",
"html_url": "https://github.com/MichaelFomenko",
"followers_url": "https://api.github.com/users/MichaelFomenko/followers",
"following_url": "https://api.github.com/users/MichaelFomenko/following{/other_user}",
"gists_url": "https://api.github.com/users/MichaelFomenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichaelFomenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichaelFomenko/subscriptions",
"organizations_url": "https://api.github.com/users/MichaelFomenko/orgs",
"repos_url": "https://api.github.com/users/MichaelFomenko/repos",
"events_url": "https://api.github.com/users/MichaelFomenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichaelFomenko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-02T21:00:11
| 2024-02-03T00:23:50
| 2024-02-03T00:23:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How can I build the Vulkan variant of llama.cpp?
Or how can I integrate the compiled llama.cpp files with Vulkan support into Ollama?
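A minimal sketch of building llama.cpp itself with Vulkan, assuming a recent llama.cpp checkout and the Vulkan SDK installed; the CMake flag follows llama.cpp's build options and may differ by version. Note this produces a standalone llama.cpp build — Ollama bundles its own runners, so swapping in an external build is not officially supported:

```shell
# From the llama.cpp source tree: configure with the Vulkan backend enabled,
# then build in Release mode (flag name per recent llama.cpp CMake options)
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release
```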
|
{
"login": "MichaelFomenko",
"id": 12229584,
"node_id": "MDQ6VXNlcjEyMjI5NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/12229584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichaelFomenko",
"html_url": "https://github.com/MichaelFomenko",
"followers_url": "https://api.github.com/users/MichaelFomenko/followers",
"following_url": "https://api.github.com/users/MichaelFomenko/following{/other_user}",
"gists_url": "https://api.github.com/users/MichaelFomenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichaelFomenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichaelFomenko/subscriptions",
"organizations_url": "https://api.github.com/users/MichaelFomenko/orgs",
"repos_url": "https://api.github.com/users/MichaelFomenko/repos",
"events_url": "https://api.github.com/users/MichaelFomenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichaelFomenko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2333/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/1348
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1348/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1348/comments
|
https://api.github.com/repos/ollama/ollama/issues/1348/events
|
https://github.com/ollama/ollama/issues/1348
| 2,021,623,480
|
I_kwDOJ0Z1Ps54f4a4
| 1,348
|
`deepseek-coder` fails to run with error
|
{
"login": "Huge",
"id": 111648,
"node_id": "MDQ6VXNlcjExMTY0OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/111648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Huge",
"html_url": "https://github.com/Huge",
"followers_url": "https://api.github.com/users/Huge/followers",
"following_url": "https://api.github.com/users/Huge/following{/other_user}",
"gists_url": "https://api.github.com/users/Huge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Huge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Huge/subscriptions",
"organizations_url": "https://api.github.com/users/Huge/orgs",
"repos_url": "https://api.github.com/users/Huge/repos",
"events_url": "https://api.github.com/users/Huge/events{/privacy}",
"received_events_url": "https://api.github.com/users/Huge/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2023-12-01T21:43:49
| 2024-04-21T21:43:47
| 2024-02-20T01:24:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
`ollama run deepseek-coder:6.7b-base-q3_K_S`
`Error: llama runner process has terminated`
This is worse than expected; I had assumed the correct ggml/GGUF engine/library is bundled with the model in this packaging.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1348/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7146
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7146/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7146/comments
|
https://api.github.com/repos/ollama/ollama/issues/7146/events
|
https://github.com/ollama/ollama/issues/7146
| 2,574,752,160
|
I_kwDOJ0Z1Ps6Zd5mg
| 7,146
|
Unable to recognize the long text content.
|
{
"login": "SDAIer",
"id": 174102361,
"node_id": "U_kgDOCmCXWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SDAIer",
"html_url": "https://github.com/SDAIer",
"followers_url": "https://api.github.com/users/SDAIer/followers",
"following_url": "https://api.github.com/users/SDAIer/following{/other_user}",
"gists_url": "https://api.github.com/users/SDAIer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SDAIer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SDAIer/subscriptions",
"organizations_url": "https://api.github.com/users/SDAIer/orgs",
"repos_url": "https://api.github.com/users/SDAIer/repos",
"events_url": "https://api.github.com/users/SDAIer/events{/privacy}",
"received_events_url": "https://api.github.com/users/SDAIer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 20
| 2024-10-09T04:50:32
| 2024-11-17T14:37:04
| 2024-11-17T14:37:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have 4 GPU cards, each with 24G of VRAM.

It works for recognizing short text content, but fails to recognize long text content.
### Model is qwen2.5:32b and ctx-size is set to 30001 to handle the long content, as follows
10月 09 12:37:41 gpu ollama[40766]: time=2024-10-09T12:37:41.871+08:00 level=INFO source=server.go:388 msg="starting llama server" cmd="/tmp/ollama2735556946/runners/cuda_v12/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-eabc98a9bcbfce7fd70f3e07de599f8fda98120fefed5881934161ede8bd1a41 --ctx-size 30001 --batch-size 512 --embedding --log-disable --n-gpu-layers 65 --parallel 1 --tensor-split 17,16,16,16 --port 41781"
### AI debug information as follows


#### ollama debug logs
10月 09 12:37:39 gpu ollama[40766]: time=2024-10-09T12:37:39.452+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11 cuda_v12 rocm_v60102 cpu cpu_avx]"
10月 09 12:37:39 gpu ollama[40766]: time=2024-10-09T12:37:39.453+08:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
10月 09 12:37:40 gpu ollama[40766]: time=2024-10-09T12:37:40.723+08:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-ad4cba93-ee35-2ea2-dba7-7b5772a098ce library=cuda variant=v12 compute=8.0 driver=12.2 name="NVIDIA A30" total="23.5 GiB" available="14.7 GiB"
10月 09 12:37:40 gpu ollama[40766]: time=2024-10-09T12:37:40.723+08:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-6b83f2f6-dc65-7feb-5e02-0cd0087995e8 library=cuda variant=v12 compute=8.0 driver=12.2 name="NVIDIA A30" total="23.5 GiB" available="16.8 GiB"
10月 09 12:37:40 gpu ollama[40766]: time=2024-10-09T12:37:40.723+08:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-ac079011-c45b-de29-f2e2-71b2e5d2d7f4 library=cuda variant=v12 compute=8.0 driver=12.2 name="NVIDIA A30" total="23.5 GiB" available="19.4 GiB"
10月 09 12:37:40 gpu ollama[40766]: time=2024-10-09T12:37:40.723+08:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-1a5993d8-1f60-3ecd-b80f-55ca9f1e95d2 library=cuda variant=v12 compute=8.0 driver=12.2 name="NVIDIA A30" total="23.5 GiB" available="20.0 GiB"
10月 09 12:37:41 gpu ollama[40766]: time=2024-10-09T12:37:41.854+08:00 level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-eabc98a9bcbfce7fd70f3e07de599f8fda98120fefed5881934161ede8bd1a41 library=cuda parallel=1 required="40.6 GiB"
10月 09 12:37:41 gpu ollama[40766]: time=2024-10-09T12:37:41.854+08:00 level=INFO source=server.go:103 msg="system memory" total="125.4 GiB" free="112.7 GiB" free_swap="3.7 GiB"
10月 09 12:37:41 gpu ollama[40766]: time=2024-10-09T12:37:41.858+08:00 level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=65 layers.offload=65 layers.split=17,16,16,16 memory.available="[20.0 GiB 19.4 GiB 16.8 GiB 14.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="40.6 GiB" memory.required.partial="40.6 GiB" memory.required.kv="7.3 GiB" memory.required.allocations="[10.6 GiB 10.0 GiB 10.0 GiB 10.0 GiB]" memory.weights.total="24.8 GiB" memory.weights.repeating="24.2 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="2.9 GiB" memory.graph.partial="2.9 GiB"
10月 09 12:37:41 gpu ollama[40766]: time=2024-10-09T12:37:41.871+08:00 level=INFO source=server.go:388 msg="starting llama server" cmd="/tmp/ollama2735556946/runners/cuda_v12/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-eabc98a9bcbfce7fd70f3e07de599f8fda98120fefed5881934161ede8bd1a41 --ctx-size 30001 --batch-size 512 --embedding --log-disable --n-gpu-layers 65 --parallel 1 --tensor-split 17,16,16,16 --port 41781"
10月 09 12:37:41 gpu ollama[40766]: time=2024-10-09T12:37:41.872+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
10月 09 12:37:41 gpu ollama[40766]: time=2024-10-09T12:37:41.872+08:00 level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
10月 09 12:37:41 gpu ollama[40766]: time=2024-10-09T12:37:41.873+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
10月 09 12:37:41 gpu ollama[40766]: INFO [main] build info | build=10 commit="9225b05" tid="140652491755520" timestamp=1728448661
10月 09 12:37:41 gpu ollama[40766]: INFO [main] system info | n_threads=32 n_threads_batch=32 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140652491755520" timestamp=1728448661 total_threads=64
10月 09 12:37:41 gpu ollama[40766]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="63" port="41781" tid="140652491755520" timestamp=1728448661
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: loaded meta data with 34 key-value pairs and 771 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-eabc98a9bcbfce7fd70f3e07de599f8fda98120fefed5881934161ede8bd1a41 (version GGUF V3 (latest))
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 0: general.architecture str = qwen2
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 1: general.type str = model
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 2: general.name str = Qwen2.5 32B Instruct
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 3: general.finetune str = Instruct
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 4: general.basename str = Qwen2.5
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 5: general.size_label str = 32B
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 6: general.license str = apache-2.0
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-3...
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 8: general.base_model.count u32 = 1
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 32B
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-32B
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 14: qwen2.block_count u32 = 64
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 27648
10月 09 12:37:41 gpu ollama[40766]: llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 22: general.file_type u32 = 15
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
10月 09 12:37:42 gpu ollama[40766]: time=2024-10-09T12:37:42.126+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - kv 33: general.quantization_version u32 = 2
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - type f32: 321 tensors
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - type q4_K: 385 tensors
10月 09 12:37:42 gpu ollama[40766]: llama_model_loader: - type q6_K: 65 tensors
10月 09 12:37:42 gpu ollama[40766]: llm_load_vocab: special tokens cache size = 22
10月 09 12:37:42 gpu ollama[40766]: llm_load_vocab: token to piece cache size = 0.9310 MB
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: format = GGUF V3 (latest)
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: arch = qwen2
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: vocab type = BPE
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_vocab = 152064
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_merges = 151387
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: vocab_only = 0
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_ctx_train = 32768
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_embd = 5120
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_layer = 64
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_head = 40
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_head_kv = 8
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_rot = 128
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_swa = 0
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_embd_head_k = 128
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_embd_head_v = 128
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_gqa = 5
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_embd_k_gqa = 1024
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_embd_v_gqa = 1024
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: f_norm_eps = 0.0e+00
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: f_logit_scale = 0.0e+00
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_ff = 27648
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_expert = 0
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_expert_used = 0
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: causal attn = 1
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: pooling type = 0
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: rope type = 2
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: rope scaling = linear
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: freq_base_train = 1000000.0
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: freq_scale_train = 1
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: n_ctx_orig_yarn = 32768
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: rope_finetuned = unknown
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: ssm_d_conv = 0
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: ssm_d_inner = 0
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: ssm_d_state = 0
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: ssm_dt_rank = 0
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: ssm_dt_b_c_rms = 0
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: model type = ?B
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: model ftype = Q4_K - Medium
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: model params = 32.76 B
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: model size = 18.48 GiB (4.85 BPW)
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: general.name = Qwen2.5 32B Instruct
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: EOS token = 151645 '<|im_end|>'
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: LF token = 148848 'ÄĬ'
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: EOT token = 151645 '<|im_end|>'
10月 09 12:37:42 gpu ollama[40766]: llm_load_print_meta: max token length = 256
10月 09 12:37:42 gpu ollama[40766]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
10月 09 12:37:42 gpu ollama[40766]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
10月 09 12:37:42 gpu ollama[40766]: ggml_cuda_init: found 4 CUDA devices:
10月 09 12:37:42 gpu ollama[40766]: Device 0: NVIDIA A30, compute capability 8.0, VMM: yes
10月 09 12:37:42 gpu ollama[40766]: Device 1: NVIDIA A30, compute capability 8.0, VMM: yes
10月 09 12:37:42 gpu ollama[40766]: Device 2: NVIDIA A30, compute capability 8.0, VMM: yes
10月 09 12:37:42 gpu ollama[40766]: Device 3: NVIDIA A30, compute capability 8.0, VMM: yes
10月 09 12:37:42 gpu ollama[40766]: llm_load_tensors: ggml ctx size = 1.69 MiB
10月 09 12:37:43 gpu ollama[40766]: time=2024-10-09T12:37:43.583+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
10月 09 12:37:45 gpu ollama[40766]: time=2024-10-09T12:37:45.233+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
10月 09 12:37:46 gpu ollama[40766]: llm_load_tensors: offloading 64 repeating layers to GPU
10月 09 12:37:46 gpu ollama[40766]: llm_load_tensors: offloading non-repeating layers to GPU
10月 09 12:37:46 gpu ollama[40766]: llm_load_tensors: offloaded 65/65 layers to GPU
10月 09 12:37:46 gpu ollama[40766]: llm_load_tensors: CPU buffer size = 417.66 MiB
10月 09 12:37:46 gpu ollama[40766]: llm_load_tensors: CUDA0 buffer size = 4844.72 MiB
10月 09 12:37:46 gpu ollama[40766]: llm_load_tensors: CUDA1 buffer size = 4366.53 MiB
10月 09 12:37:46 gpu ollama[40766]: llm_load_tensors: CUDA2 buffer size = 4366.53 MiB
10月 09 12:37:46 gpu ollama[40766]: llm_load_tensors: CUDA3 buffer size = 4930.57 MiB
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: n_ctx = 30016
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: n_batch = 512
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: n_ubatch = 512
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: flash_attn = 0
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: freq_base = 1000000.0
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: freq_scale = 1
10月 09 12:37:49 gpu ollama[40766]: llama_kv_cache_init: CUDA0 KV buffer size = 1993.25 MiB
10月 09 12:37:49 gpu ollama[40766]: llama_kv_cache_init: CUDA1 KV buffer size = 1876.00 MiB
10月 09 12:37:49 gpu ollama[40766]: llama_kv_cache_init: CUDA2 KV buffer size = 1876.00 MiB
10月 09 12:37:49 gpu ollama[40766]: llama_kv_cache_init: CUDA3 KV buffer size = 1758.75 MiB
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: KV self size = 7504.00 MiB, K (f16): 3752.00 MiB, V (f16): 3752.00 MiB
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: CUDA_Host output buffer size = 0.60 MiB
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: CUDA0 compute buffer size = 2659.51 MiB
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: CUDA1 compute buffer size = 2659.51 MiB
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: CUDA2 compute buffer size = 2659.51 MiB
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: CUDA3 compute buffer size = 2659.52 MiB
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: CUDA_Host compute buffer size = 244.52 MiB
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: graph nodes = 2246
10月 09 12:37:49 gpu ollama[40766]: llama_new_context_with_model: graph splits = 5
10月 09 12:37:49 gpu ollama[40766]: INFO [main] model loaded | tid="140652491755520" timestamp=1728448669
10月 09 12:37:49 gpu ollama[40766]: time=2024-10-09T12:37:49.646+08:00 level=INFO source=server.go:626 msg="llama runner started in 7.77 seconds"
10月 09 12:37:51 gpu ollama[40766]: [GIN] 2024/10/09 - 12:37:51 | 200 | 10.947908166s | 172.22.1.39 | POST "/api/chat"
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.3.11
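For reference, the context window reported above can be pinned per request through the standard Ollama REST API (`options.num_ctx`). The sketch below only builds the request body; the model name and context size mirror the values from this report, and the prompt text is a placeholder:

```python
import json


def build_chat_request(model: str, prompt: str, num_ctx: int) -> str:
    """Serialize an /api/chat request whose context window is set explicitly."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # num_ctx overrides the model's default context length for this request
        "options": {"num_ctx": num_ctx},
        "stream": False,
    }
    return json.dumps(payload)


body = build_chat_request("qwen2.5:32b", "Summarize the attached text.", 30001)
# POST this body to http://localhost:11434/api/chat
```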
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7146/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8489
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8489/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8489/comments
|
https://api.github.com/repos/ollama/ollama/issues/8489/events
|
https://github.com/ollama/ollama/issues/8489
| 2,797,755,322
|
I_kwDOJ0Z1Ps6mwlu6
| 8,489
|
A new, much more relaxed way to use the Models offered by Ollama ( Sauraya )
|
{
"login": "Donadev56",
"id": 187584328,
"node_id": "U_kgDOCy5PSA",
"avatar_url": "https://avatars.githubusercontent.com/u/187584328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Donadev56",
"html_url": "https://github.com/Donadev56",
"followers_url": "https://api.github.com/users/Donadev56/followers",
"following_url": "https://api.github.com/users/Donadev56/following{/other_user}",
"gists_url": "https://api.github.com/users/Donadev56/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Donadev56/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Donadev56/subscriptions",
"organizations_url": "https://api.github.com/users/Donadev56/orgs",
"repos_url": "https://api.github.com/users/Donadev56/repos",
"events_url": "https://api.github.com/users/Donadev56/events{/privacy}",
"received_events_url": "https://api.github.com/users/Donadev56/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-19T18:04:59
| 2025-01-19T18:06:29
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# Sauraya AI (Ollama Mobile App )


## Introduction
Sauraya is a modern user interface designed to interact with open-source AI models such as Meta's Llama. This application is fully open-source and free, created to bring the benefits of Meta's AI products to a broader audience without the need to set up resource-intensive local environments. With Sauraya, users simply download the app from [Sauraya.com](https://sauraya.com), install it, and start engaging with Meta's advanced AI models.
Sauraya is free and maintained by the **Opennode Team** ([opennode.tech](https://opennode.tech)).
## Privacy-Focused and Local Storage
Sauraya prioritizes user privacy by storing all conversations locally in an encrypted file accessible on Android at:
```
/android/com.sauraya/database/user-id.sauraya.crypt
```
The conversation file is a lightweight JSON, ensuring minimal storage usage. Upon app launch, file system access will be required.
Sauraya does not collect any user data, focusing solely on confidentiality. Users can perform various actions through the website, such as:
- Executing Python code
- Listening to AI-generated text via text-to-speech
- Renaming conversation titles
- And much more, with future updates to come.
The team works tirelessly to maintain and improve the application.
## Open Source Code
The source code, written in Dart with Flutter, is available on GitHub:
[https://github.com/Donadev56/sauraya-dart](https://github.com/Donadev56/sauraya-dart)
### License
**MIT License**
## Developer
**Devoue-Li-Tchibeni Dona Dieu Talliane**
GitHub: [https://github.com/Donadev56](https://github.com/Donadev56)
Skills: JavaScript, TypeScript, React Native, Next.js, Node.js, Dart, Flutter, Solidity, and more.
## Acknowledgments
We express our gratitude to:
- **Meta** for their Llama 3.2 and Llama 3.1 models, which made Sauraya possible.
- **Ollama.com** for providing well-documented APIs to interact with AI models.
- **The Flutter and Dart teams** for the language and framework enabling the creation of this application.
## Feedback
We welcome all suggestions for improving the project.
---
Best regards,
**The Opennode Team**

Website: [opennode.tech](https://opennode.tech)
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8489/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2831
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2831/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2831/comments
|
https://api.github.com/repos/ollama/ollama/issues/2831/events
|
https://github.com/ollama/ollama/issues/2831
| 2,161,138,314
|
I_kwDOJ0Z1Ps6A0FqK
| 2,831
|
Windows: connection forcibly closes when adding image to llava prompt - CUDA out of memory
|
{
"login": "jakobhoeg",
"id": 114422072,
"node_id": "U_kgDOBtHxOA",
"avatar_url": "https://avatars.githubusercontent.com/u/114422072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jakobhoeg",
"html_url": "https://github.com/jakobhoeg",
"followers_url": "https://api.github.com/users/jakobhoeg/followers",
"following_url": "https://api.github.com/users/jakobhoeg/following{/other_user}",
"gists_url": "https://api.github.com/users/jakobhoeg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jakobhoeg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jakobhoeg/subscriptions",
"organizations_url": "https://api.github.com/users/jakobhoeg/orgs",
"repos_url": "https://api.github.com/users/jakobhoeg/repos",
"events_url": "https://api.github.com/users/jakobhoeg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jakobhoeg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-02-29T12:25:14
| 2024-06-04T12:11:30
| 2024-06-04T12:11:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm trying to use llava to identify a photo and it gives this error:
```
>>> What is in this image? /users/jakob/desktop/jakob.jpg
Added image '/users/jakob/desktop/jakob.jpg'
Error: Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:55783->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.
```
This is the server.log:
```
time=2024-02-29T13:19:37.052+01:00 level=INFO source=dyn_ext_server.go:171 msg="loaded 1 images"
CUDA error: out of memory
clip_model_load: model name: openai/clip-vit-large-patch14-336
clip_model_load: description: image encoder for LLaVA
clip_model_load: GGUF version: 3
clip_model_load: alignment: 32
clip_model_load: n_tensors: 377
clip_model_load: n_kv: 19
clip_model_load: ftype: f16
clip_model_load: loaded meta data with 19 key-value pairs and 377 tensors from C:\Users\jakob\.ollama\models\blobs\sha256-72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539
clip_model_load: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
clip_model_load: - kv 0: general.architecture str = clip
clip_model_load: - kv 1: clip.has_text_encoder bool = false
clip_model_load: - kv 2: clip.has_vision_encoder bool = true
clip_model_load: - kv 3: clip.has_llava_projector bool = true
clip_model_load: - kv 4: general.file_type u32 = 1
clip_model_load: - kv 5: general.name str = openai/clip-vit-large-patch14-336
clip_model_load: - kv 6: general.description str = image encoder for LLaVA
clip_model_load: - kv 7: clip.projector_type str = mlp
clip_model_load: - kv 8: clip.vision.image_size u32 = 336
clip_model_load: - kv 9: clip.vision.patch_size u32 = 14
clip_model_load: - kv 10: clip.vision.embedding_length u32 = 1024
clip_model_load: - kv 11: clip.vision.feed_forward_length u32 = 4096
clip_model_load: - kv 12: clip.vision.projection_dim u32 = 768
clip_model_load: - kv 13: clip.vision.attention.head_count u32 = 16
clip_model_load: - kv 14: clip.vision.attention.layer_norm_epsilon f32 = 0.000010
clip_model_load: - kv 15: clip.vision.block_count u32 = 23
clip_model_load: - kv 16: clip.vision.image_mean arr[f32,3] = [0.481455, 0.457828, 0.408211]
clip_model_load: - kv 17: clip.vision.image_std arr[f32,3] = [0.268630, 0.261303, 0.275777]
clip_model_load: - kv 18: clip.use_gelu bool = false
clip_model_load: - type f32: 235 tensors
clip_model_load: - type f16: 142 tensors
clip_model_load: CLIP using CUDA backend
clip_model_load: text_encoder: 0
clip_model_load: vision_encoder: 1
clip_model_load: llava_projector: 1
clip_model_load: model size: 595.49 MB
clip_model_load: metadata size: 0.14 MB
clip_model_load: params backend buffer size = 595.49 MB (377 tensors)
clip_model_load: compute allocated memory: 32.89 MB
encode_image_with_clip: image embedding created: 576 tokens
encode_image_with_clip: image encoded in 239.12 ms by CLIP ( 0.42 ms per image patch)
current device: 0, in function ggml_cuda_pool_malloc_vmm at C:\Users\jeff\git\ollama\llm\llama.cpp\ggml-cuda.cu:7990
cuMemSetAccess(g_cuda_pool_addr[device] + g_cuda_pool_size[device], reserve_size, &access, 1)
GGML_ASSERT: C:\Users\jeff\git\ollama\llm\llama.cpp\ggml-cuda.cu:243: !"CUDA error"
```
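A common mitigation for CUDA out-of-memory errors like the one above is to offload fewer layers to the GPU via the `num_gpu` option of the standard Ollama REST API. This is only a workaround sketch, not a fix: it builds the request body without sending it, and the layer count of 20 is an arbitrary example value:

```python
import json


def build_generate_request(model: str, prompt: str, num_gpu: int) -> str:
    """Serialize an /api/generate request that limits GPU layer offload."""
    payload = {
        "model": model,
        "prompt": prompt,
        # num_gpu caps how many layers are offloaded, leaving VRAM headroom
        "options": {"num_gpu": num_gpu},
        "stream": False,
    }
    return json.dumps(payload)


body = build_generate_request("llava", "What is in this image?", 20)
# POST to http://localhost:11434/api/generate
# (for llava, also include an "images": ["<base64>"] field in the payload)
```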
|
{
"login": "jakobhoeg",
"id": 114422072,
"node_id": "U_kgDOBtHxOA",
"avatar_url": "https://avatars.githubusercontent.com/u/114422072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jakobhoeg",
"html_url": "https://github.com/jakobhoeg",
"followers_url": "https://api.github.com/users/jakobhoeg/followers",
"following_url": "https://api.github.com/users/jakobhoeg/following{/other_user}",
"gists_url": "https://api.github.com/users/jakobhoeg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jakobhoeg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jakobhoeg/subscriptions",
"organizations_url": "https://api.github.com/users/jakobhoeg/orgs",
"repos_url": "https://api.github.com/users/jakobhoeg/repos",
"events_url": "https://api.github.com/users/jakobhoeg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jakobhoeg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2831/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6897
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6897/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6897/comments
|
https://api.github.com/repos/ollama/ollama/issues/6897/events
|
https://github.com/ollama/ollama/pull/6897
| 2,539,762,123
|
PR_kwDOJ0Z1Ps58ODh5
| 6,897
|
CI iteration
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-20T22:16:20
| 2024-09-20T23:54:05
| 2024-09-20T23:54:02
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6897",
"html_url": "https://github.com/ollama/ollama/pull/6897",
"diff_url": "https://github.com/ollama/ollama/pull/6897.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6897.patch",
"merged_at": null
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6897/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/937
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/937/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/937/comments
|
https://api.github.com/repos/ollama/ollama/issues/937/events
|
https://github.com/ollama/ollama/pull/937
| 1,966,165,485
|
PR_kwDOJ0Z1Ps5eAd1q
| 937
|
clean up: remove server functions from client
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-27T20:40:04
| 2023-10-30T15:10:20
| 2023-10-30T15:10:19
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/937",
"html_url": "https://github.com/ollama/ollama/pull/937",
"diff_url": "https://github.com/ollama/ollama/pull/937.diff",
"patch_url": "https://github.com/ollama/ollama/pull/937.patch",
"merged_at": "2023-10-30T15:10:19"
}
|
We have had trouble with cross-account file permissions when the Ollama client and server are running as different users. This change is a small clean-up that removes all calls to server package code from the client (except for `server.Run()`). From now on we should not call any server package functions from `cmd`, to prevent bugs.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/937/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2567
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2567/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2567/comments
|
https://api.github.com/repos/ollama/ollama/issues/2567/events
|
https://github.com/ollama/ollama/issues/2567
| 2,140,520,023
|
I_kwDOJ0Z1Ps5_lb5X
| 2,567
|
Clarify about Telemetry
|
{
"login": "user82622",
"id": 88026138,
"node_id": "MDQ6VXNlcjg4MDI2MTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/88026138?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/user82622",
"html_url": "https://github.com/user82622",
"followers_url": "https://api.github.com/users/user82622/followers",
"following_url": "https://api.github.com/users/user82622/following{/other_user}",
"gists_url": "https://api.github.com/users/user82622/gists{/gist_id}",
"starred_url": "https://api.github.com/users/user82622/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/user82622/subscriptions",
"organizations_url": "https://api.github.com/users/user82622/orgs",
"repos_url": "https://api.github.com/users/user82622/repos",
"events_url": "https://api.github.com/users/user82622/events{/privacy}",
"received_events_url": "https://api.github.com/users/user82622/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-02-17T19:46:16
| 2024-10-09T18:44:53
| 2024-02-19T16:55:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It seems the ollama binary is using some type of telemetry. Please clarify what this data is and where it is sent, and give us an option to opt out (or better, make it opt-in). Many users assume this is a private alternative to the big cloud LLMs; if the program has telemetry that potentially reveals private data, that can be super misleading.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2567/reactions",
"total_count": 8,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2567/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4254
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4254/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4254/comments
|
https://api.github.com/repos/ollama/ollama/issues/4254/events
|
https://github.com/ollama/ollama/issues/4254
| 2,284,954,866
|
I_kwDOJ0Z1Ps6IMaTy
| 4,254
|
How does the ollama model reside on the GPU?
|
{
"login": "lonngxiang",
"id": 40717349,
"node_id": "MDQ6VXNlcjQwNzE3MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/40717349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lonngxiang",
"html_url": "https://github.com/lonngxiang",
"followers_url": "https://api.github.com/users/lonngxiang/followers",
"following_url": "https://api.github.com/users/lonngxiang/following{/other_user}",
"gists_url": "https://api.github.com/users/lonngxiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lonngxiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lonngxiang/subscriptions",
"organizations_url": "https://api.github.com/users/lonngxiang/orgs",
"repos_url": "https://api.github.com/users/lonngxiang/repos",
"events_url": "https://api.github.com/users/lonngxiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/lonngxiang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-05-08T08:01:02
| 2024-05-10T20:19:54
| 2024-05-10T20:19:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4254/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6012
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6012/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6012/comments
|
https://api.github.com/repos/ollama/ollama/issues/6012/events
|
https://github.com/ollama/ollama/issues/6012
| 2,433,388,009
|
I_kwDOJ0Z1Ps6RCo3p
| 6,012
|
/api/chat API returns empty information
|
{
"login": "du-kk",
"id": 14095064,
"node_id": "MDQ6VXNlcjE0MDk1MDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/14095064?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/du-kk",
"html_url": "https://github.com/du-kk",
"followers_url": "https://api.github.com/users/du-kk/followers",
"following_url": "https://api.github.com/users/du-kk/following{/other_user}",
"gists_url": "https://api.github.com/users/du-kk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/du-kk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/du-kk/subscriptions",
"organizations_url": "https://api.github.com/users/du-kk/orgs",
"repos_url": "https://api.github.com/users/du-kk/repos",
"events_url": "https://api.github.com/users/du-kk/events{/privacy}",
"received_events_url": "https://api.github.com/users/du-kk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-07-27T09:17:55
| 2024-07-28T13:56:07
| 2024-07-28T13:54:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Environment:
OS: UOS Linux
GPU: NVIDIA L20
Ollama version: 0.2.8
Issue:
input: curl 127.0.0.1:11434/api/generate -d '{"model": "qwen2:0.5b", "pormpt": "who are you?", "format": "json", "stream": false}'
always returns:
{"model": "qwen2:0.5b", "created_at": "2024-07-27T08:46:47.54755Z", "response": "", "done": true, "done_reason": "load"}
There are no error messages in `journalctl -u ollama`, only one request message:
gpu1 ollama[30085]: [GIN] 2024/07/27 - 16:44:36 | 200 | 39.405859ms | 23.36.75.155 | POST "/api/chat"
Please help me, thank you
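One likely cause: the request body misspells the field name. The /api/generate API expects the key `prompt`, while the command above sends `pormpt`; an unknown key is ignored, which leaves the prompt empty and matches the empty `response` and `done_reason` of "load" shown above. A corrected request of this shape (a sketch only, written to a file rather than sent, since it assumes a running server):

```shell
# Corrected /api/generate request; the key must be "prompt", not "pormpt"
# as typed in the original command. Written to /tmp so the sketch is
# self-contained and does not require a running ollama server.
cat > /tmp/ollama-gen-request.txt <<'EOF'
curl 127.0.0.1:11434/api/generate -d '{"model": "qwen2:0.5b", "prompt": "who are you?", "format": "json", "stream": false}'
EOF
cat /tmp/ollama-gen-request.txt
```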
|
{
"login": "du-kk",
"id": 14095064,
"node_id": "MDQ6VXNlcjE0MDk1MDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/14095064?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/du-kk",
"html_url": "https://github.com/du-kk",
"followers_url": "https://api.github.com/users/du-kk/followers",
"following_url": "https://api.github.com/users/du-kk/following{/other_user}",
"gists_url": "https://api.github.com/users/du-kk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/du-kk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/du-kk/subscriptions",
"organizations_url": "https://api.github.com/users/du-kk/orgs",
"repos_url": "https://api.github.com/users/du-kk/repos",
"events_url": "https://api.github.com/users/du-kk/events{/privacy}",
"received_events_url": "https://api.github.com/users/du-kk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6012/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5131
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5131/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5131/comments
|
https://api.github.com/repos/ollama/ollama/issues/5131/events
|
https://github.com/ollama/ollama/pull/5131
| 2,361,170,374
|
PR_kwDOJ0Z1Ps5y5gR0
| 5,131
|
linux.md: Make it clear that ollama does not need to be installed as a service [docs only]
|
{
"login": "crazy2be",
"id": 667720,
"node_id": "MDQ6VXNlcjY2NzcyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/667720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/crazy2be",
"html_url": "https://github.com/crazy2be",
"followers_url": "https://api.github.com/users/crazy2be/followers",
"following_url": "https://api.github.com/users/crazy2be/following{/other_user}",
"gists_url": "https://api.github.com/users/crazy2be/gists{/gist_id}",
"starred_url": "https://api.github.com/users/crazy2be/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/crazy2be/subscriptions",
"organizations_url": "https://api.github.com/users/crazy2be/orgs",
"repos_url": "https://api.github.com/users/crazy2be/repos",
"events_url": "https://api.github.com/users/crazy2be/events{/privacy}",
"received_events_url": "https://api.github.com/users/crazy2be/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-06-19T03:13:25
| 2024-11-23T21:32:54
| 2024-11-23T21:32:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5131",
"html_url": "https://github.com/ollama/ollama/pull/5131",
"diff_url": "https://github.com/ollama/ollama/pull/5131.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5131.patch",
"merged_at": null
}
|
I had gotten halfway through these steps before realizing they were fully optional and overkill for my use case of playing with ollama.
This commit makes it clearer that these steps are optional, and only recommended if you will be using ollama quite regularly.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5131/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1146
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1146/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1146/comments
|
https://api.github.com/repos/ollama/ollama/issues/1146/events
|
https://github.com/ollama/ollama/pull/1146
| 1,995,800,258
|
PR_kwDOJ0Z1Ps5fkuCy
| 1,146
|
Add cgo implementation for llama.cpp
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 18
| 2023-11-16T00:18:06
| 2024-01-10T15:57:23
| 2023-12-22T16:16:31
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1146",
"html_url": "https://github.com/ollama/ollama/pull/1146",
"diff_url": "https://github.com/ollama/ollama/pull/1146.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1146.patch",
"merged_at": "2023-12-22T16:16:31"
}
|
This change revamps the way ollama wires up llama.cpp for gguf to link directly via cgo instead
of running a subprocess. Within llama.cpp, a thin facade has been added to server.cpp (via included patch)
to enable extern "C" access to the main logic to minimize changes to the existing LLM interface.
Mac, Linux, and Windows are supported and manually tested.
Carries #1268 and #814
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1146/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1146/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4771
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4771/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4771/comments
|
https://api.github.com/repos/ollama/ollama/issues/4771/events
|
https://github.com/ollama/ollama/issues/4771
| 2,329,337,179
|
I_kwDOJ0Z1Ps6K1t1b
| 4,771
|
Ignoring env, being weird with env
|
{
"login": "RealMrCactus",
"id": 36554881,
"node_id": "MDQ6VXNlcjM2NTU0ODgx",
"avatar_url": "https://avatars.githubusercontent.com/u/36554881?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RealMrCactus",
"html_url": "https://github.com/RealMrCactus",
"followers_url": "https://api.github.com/users/RealMrCactus/followers",
"following_url": "https://api.github.com/users/RealMrCactus/following{/other_user}",
"gists_url": "https://api.github.com/users/RealMrCactus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RealMrCactus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RealMrCactus/subscriptions",
"organizations_url": "https://api.github.com/users/RealMrCactus/orgs",
"repos_url": "https://api.github.com/users/RealMrCactus/repos",
"events_url": "https://api.github.com/users/RealMrCactus/events{/privacy}",
"received_events_url": "https://api.github.com/users/RealMrCactus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-01T20:57:38
| 2024-09-16T23:53:17
| 2024-09-16T23:53:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I use `systemctl edit` to make an override and set OLLAMA_HOST to 0.0.0.0, placed within the bounds of where it should be, but `systemctl status` says it is out of bounds. If I edit the service file directly to add it instead, it hosts on `[::]:11434`.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.39
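For reference, a drop-in of this shape is what `systemctl edit ollama.service` expects for the override described above (a sketch; the `:11434` port is an assumption based on the default, and the file is written under /tmp here so the example is self-contained):

```shell
# Hypothetical contents of the systemd drop-in that `systemctl edit
# ollama.service` would create; the Environment= line must sit under
# a [Service] section header.
mkdir -p /tmp/ollama-demo/ollama.service.d
cat > /tmp/ollama-demo/ollama.service.d/override.conf <<'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
EOF
cat /tmp/ollama-demo/ollama.service.d/override.conf
```

Note that a listener reported as `[::]:11434` is typically a dual-stack wildcard socket on Linux and still accepts IPv4 connections.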
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4771/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/808
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/808/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/808/comments
|
https://api.github.com/repos/ollama/ollama/issues/808/events
|
https://github.com/ollama/ollama/issues/808
| 1,946,047,540
|
I_kwDOJ0Z1Ps5z_lQ0
| 808
|
Grammar-guided generation support
|
{
"login": "tmc",
"id": 3977,
"node_id": "MDQ6VXNlcjM5Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tmc",
"html_url": "https://github.com/tmc",
"followers_url": "https://api.github.com/users/tmc/followers",
"following_url": "https://api.github.com/users/tmc/following{/other_user}",
"gists_url": "https://api.github.com/users/tmc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tmc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tmc/subscriptions",
"organizations_url": "https://api.github.com/users/tmc/orgs",
"repos_url": "https://api.github.com/users/tmc/repos",
"events_url": "https://api.github.com/users/tmc/events{/privacy}",
"received_events_url": "https://api.github.com/users/tmc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6100196012,
"node_id": "LA_kwDOJ0Z1Ps8AAAABa5marA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feedback%20wanted",
"name": "feedback wanted",
"color": "0e8a16",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 15
| 2023-10-16T20:35:30
| 2024-08-07T16:58:37
| 2023-12-04T20:39:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Thoughts on introducing a straightforward way for a Modelfile to point to a grammar and thread that through to sampling/inference?
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/808/reactions",
"total_count": 31,
"+1": 31,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/808/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5154
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5154/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5154/comments
|
https://api.github.com/repos/ollama/ollama/issues/5154/events
|
https://github.com/ollama/ollama/issues/5154
| 2,363,285,116
|
I_kwDOJ0Z1Ps6M3N58
| 5,154
|
Can we add support for `firefunction-v2`? Competitive with GPT-4o at function-calling
|
{
"login": "talperetz",
"id": 11588598,
"node_id": "MDQ6VXNlcjExNTg4NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/11588598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talperetz",
"html_url": "https://github.com/talperetz",
"followers_url": "https://api.github.com/users/talperetz/followers",
"following_url": "https://api.github.com/users/talperetz/following{/other_user}",
"gists_url": "https://api.github.com/users/talperetz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talperetz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talperetz/subscriptions",
"organizations_url": "https://api.github.com/users/talperetz/orgs",
"repos_url": "https://api.github.com/users/talperetz/repos",
"events_url": "https://api.github.com/users/talperetz/events{/privacy}",
"received_events_url": "https://api.github.com/users/talperetz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-20T00:20:40
| 2024-07-26T00:47:33
| 2024-07-26T00:47:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/fireworks-ai/firefunction-v2
Competitive with GPT-4o at function-calling, scoring 0.81 vs 0.80 on a medley of public evaluations
Trained on Llama 3 and retains Llama 3’s conversation and instruction-following capabilities, scoring 0.84 vs Llama 3’s 0.89 on MT bench
Significant quality improvements over FireFunction v1 across the broad range of metrics
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5154/reactions",
"total_count": 8,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 6
}
|
https://api.github.com/repos/ollama/ollama/issues/5154/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2993
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2993/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2993/comments
|
https://api.github.com/repos/ollama/ollama/issues/2993/events
|
https://github.com/ollama/ollama/issues/2993
| 2,174,986,877
|
I_kwDOJ0Z1Ps6Bo6p9
| 2,993
|
Ollama only runs off CPU in Ubuntu
|
{
"login": "uansah",
"id": 9909355,
"node_id": "MDQ6VXNlcjk5MDkzNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9909355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uansah",
"html_url": "https://github.com/uansah",
"followers_url": "https://api.github.com/users/uansah/followers",
"following_url": "https://api.github.com/users/uansah/following{/other_user}",
"gists_url": "https://api.github.com/users/uansah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uansah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uansah/subscriptions",
"organizations_url": "https://api.github.com/users/uansah/orgs",
"repos_url": "https://api.github.com/users/uansah/repos",
"events_url": "https://api.github.com/users/uansah/events{/privacy}",
"received_events_url": "https://api.github.com/users/uansah/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-03-07T23:11:47
| 2024-03-11T22:22:39
| 2024-03-11T22:22:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am trying to run Dolphin Mistral on CPU and RAM (no GPU). I have 188 GB available, but it uses 100% of the CPU at full capacity. Before this I was running it on a virtual machine with the same OS and it used RAM.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2993/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2993/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7817
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7817/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7817/comments
|
https://api.github.com/repos/ollama/ollama/issues/7817/events
|
https://github.com/ollama/ollama/issues/7817
| 2,687,859,027
|
I_kwDOJ0Z1Ps6gNXlT
| 7,817
|
Missing ROCm Library Files In ollama-linux-amd64-rocm.tgz
|
{
"login": "admpalma",
"id": 45296040,
"node_id": "MDQ6VXNlcjQ1Mjk2MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/45296040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/admpalma",
"html_url": "https://github.com/admpalma",
"followers_url": "https://api.github.com/users/admpalma/followers",
"following_url": "https://api.github.com/users/admpalma/following{/other_user}",
"gists_url": "https://api.github.com/users/admpalma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/admpalma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/admpalma/subscriptions",
"organizations_url": "https://api.github.com/users/admpalma/orgs",
"repos_url": "https://api.github.com/users/admpalma/repos",
"events_url": "https://api.github.com/users/admpalma/events{/privacy}",
"received_events_url": "https://api.github.com/users/admpalma/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6678628138,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjhPHKg",
"url": "https://api.github.com/repos/ollama/ollama/labels/install",
"name": "install",
"color": "E0B88D",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-11-24T14:58:26
| 2024-12-10T17:47:23
| 2024-12-10T17:47:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
## Issue Description:
From Ollama v0.4.0 onwards, the `libhipblas.so.2` and `libhipblas.so.2.1.60102` files are no longer present in the `ollama-linux-amd64-rocm.tgz` archive.
This makes the library unusable by the ollama instance.
[Original Issue](https://github.com/Jeffser/Alpaca/issues/139#issuecomment-2494935019)
> Ollama [v0.3.14](https://github.com/ollama/ollama/releases/tag/v0.3.14):
>
> ```
> libhipblas.so.2 -> libhipblas.so.2.1.60102
> libhipblas.so.2.1.60102
> ```
>
> Ollama [v0.4.0](https://github.com/ollama/ollama/releases/tag/v0.4.0):
>
> ```
> libhipblas.so -> libhipblas.so.2 (broken link)
> ```
## Found Evidence:
Looking at the build logs ("Run ./scripts/build_linux.sh" step):
### [v0.3.14](https://github.com/ollama/ollama/actions/runs/11432081098/job/31801985394):
### libhipblas:
`libhipblas.so.2` and `libhipblas.so.2.1.60102` are copied to the `linux-amd64-rocm/lib/ollama` path:
##### `linux-amd64-rocm/lib/ollama`:
```
#43 2018.8 + cp -a /opt/rocm/lib/libhipblas.so.2 /opt/rocm/lib/libhipblas.so.2.1.60102 ../../dist/linux-amd64//../linux-amd64-rocm/lib/ollama
...
#43 2018.8 + cp /opt/rocm-6.1.2/lib/libhipblas.so.2.1.60102 ../../dist/linux-amd64//../linux-amd64-rocm/lib/ollama
```
### [v0.4.0](https://github.com/ollama/ollama/actions/runs/11706618932/job/32604128986):
### libhipblas:
`libhipblas.so.2` and `libhipblas.so.2.1.60102` are now copied into `linux-amd64/lib/ollama` and only `libhipblas.so` is copied into `linux-amd64-rocm/lib/ollama`, causing the broken link.
##### `linux-amd64/lib/ollama`:
```
#19 1826.4 cp -af /opt/rocm/lib/libhipblas.so.2.1.60102 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ollama/
#19 1826.4 cp -af /opt/rocm/lib/libhipblas.so.2 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ollama/
```
##### `linux-amd64-rocm/lib/ollama`:
```
#19 1828.5 cp -af /opt/rocm/lib//libhipblas.so /go/src/github.com/ollama/ollama/dist/linux-amd64-rocm/lib/ollama/
```
### librocblas:
Additionally, the `librocblas` related files are copied into `linux-amd64/lib/ollama` when they should only be present in `linux-amd64-rocm/lib/ollama`:
##### `linux-amd64/lib/ollama`:
```
#19 1826.4 cp -af /opt/rocm/lib/librocblas.so.4.1.60102 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ollama/
#19 1827.7 cp -af /opt/rocm/lib/librocblas.so.4 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ollama/
```
##### `linux-amd64-rocm/lib/ollama`:
```
#19 1828.7 cp -af /opt/rocm/lib//librocblas.so /go/src/github.com/ollama/ollama/dist/linux-amd64-rocm/lib/ollama/
#19 1828.7 cp -af /opt/rocm/lib//librocblas.so.4 /go/src/github.com/ollama/ollama/dist/linux-amd64-rocm/lib/ollama/
#19 1828.8 cp -af /opt/rocm-6.1.2/lib//librocblas.so.4.1.60102 /go/src/github.com/ollama/ollama/dist/linux-amd64-rocm/lib/ollama/
```
### [v0.4.4](https://github.com/ollama/ollama/actions/runs/11982171234/job/33409747042):
In `v0.4.4` we can observe the same behavior as in `v0.4.0`:
### libhipblas:
##### `linux-amd64/lib/ollama`:
```
#19 1292.4 cp -af /opt/rocm/lib/libhipblas.so.2.1.60102 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ollama/
#19 1292.4 cp -af /opt/rocm/lib/libhipblas.so.2 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ollama/
```
##### `linux-amd64-rocm/lib/ollama`:
```
#19 1425.8 cp -af /opt/rocm/lib//libhipblas.so /go/src/github.com/ollama/ollama/dist/linux-amd64-rocm/lib/ollama/
```
### librocblas:
##### `linux-amd64/lib/ollama`:
```
#19 1292.4 cp -af /opt/rocm/lib/librocblas.so.4.1.60102 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ollama/
...
#19 1293.7 cp -af /opt/rocm/lib/librocblas.so.4 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ollama/
```
##### `linux-amd64-rocm/lib/ollama`:
```
#19 1753.5 cp -af /opt/rocm/lib//librocblas.so /go/src/github.com/ollama/ollama/dist/linux-amd64-rocm/lib/ollama/
#19 1753.6 cp -af /opt/rocm/lib//librocblas.so.4 /go/src/github.com/ollama/ollama/dist/linux-amd64-rocm/lib/ollama/
#19 1753.6 cp -af /opt/rocm-6.1.2/lib//librocblas.so.4.1.60102 /go/src/github.com/ollama/ollama/dist/linux-amd64-rocm/lib/ollama/
```
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
v0.4.4
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7817/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7501
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7501/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7501/comments
|
https://api.github.com/repos/ollama/ollama/issues/7501/events
|
https://github.com/ollama/ollama/pull/7501
| 2,634,227,187
|
PR_kwDOJ0Z1Ps6A3vGm
| 7,501
|
add AI from @samrgaire10
|
{
"login": "samirgaire10",
"id": 118608337,
"node_id": "U_kgDOBxHR0Q",
"avatar_url": "https://avatars.githubusercontent.com/u/118608337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samirgaire10",
"html_url": "https://github.com/samirgaire10",
"followers_url": "https://api.github.com/users/samirgaire10/followers",
"following_url": "https://api.github.com/users/samirgaire10/following{/other_user}",
"gists_url": "https://api.github.com/users/samirgaire10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samirgaire10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samirgaire10/subscriptions",
"organizations_url": "https://api.github.com/users/samirgaire10/orgs",
"repos_url": "https://api.github.com/users/samirgaire10/repos",
"events_url": "https://api.github.com/users/samirgaire10/events{/privacy}",
"received_events_url": "https://api.github.com/users/samirgaire10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-05T01:51:56
| 2024-11-18T00:09:18
| 2024-11-18T00:09:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7501",
"html_url": "https://github.com/ollama/ollama/pull/7501",
"diff_url": "https://github.com/ollama/ollama/pull/7501.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7501.patch",
"merged_at": null
}
| null |
{
"login": "samirgaire10",
"id": 118608337,
"node_id": "U_kgDOBxHR0Q",
"avatar_url": "https://avatars.githubusercontent.com/u/118608337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samirgaire10",
"html_url": "https://github.com/samirgaire10",
"followers_url": "https://api.github.com/users/samirgaire10/followers",
"following_url": "https://api.github.com/users/samirgaire10/following{/other_user}",
"gists_url": "https://api.github.com/users/samirgaire10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samirgaire10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samirgaire10/subscriptions",
"organizations_url": "https://api.github.com/users/samirgaire10/orgs",
"repos_url": "https://api.github.com/users/samirgaire10/repos",
"events_url": "https://api.github.com/users/samirgaire10/events{/privacy}",
"received_events_url": "https://api.github.com/users/samirgaire10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7501/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1605
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1605/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1605/comments
|
https://api.github.com/repos/ollama/ollama/issues/1605/events
|
https://github.com/ollama/ollama/issues/1605
| 2,048,646,498
|
I_kwDOJ0Z1Ps56G91i
| 1,605
|
Failed to Load Model Error in Ollama 0.0.0
|
{
"login": "mariusraupach",
"id": 108870491,
"node_id": "U_kgDOBn07Ww",
"avatar_url": "https://avatars.githubusercontent.com/u/108870491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariusraupach",
"html_url": "https://github.com/mariusraupach",
"followers_url": "https://api.github.com/users/mariusraupach/followers",
"following_url": "https://api.github.com/users/mariusraupach/following{/other_user}",
"gists_url": "https://api.github.com/users/mariusraupach/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariusraupach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariusraupach/subscriptions",
"organizations_url": "https://api.github.com/users/mariusraupach/orgs",
"repos_url": "https://api.github.com/users/mariusraupach/repos",
"events_url": "https://api.github.com/users/mariusraupach/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariusraupach/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 8
| 2023-12-19T13:19:54
| 2024-03-11T09:30:04
| 2023-12-19T21:04:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Description
Encountered an issue while trying to load a model in Ollama. The error message received is:
"Error: llama runner: failed to load model '/Users/mariusraupach/.ollama/models/blobs/sha256:bdb11b0699e03d791f0accd97279989d810d79615c6cf5ac21fb68e8f33e8ca3': this model may be incompatible with your version of Ollama. If you previously pulled this model, try updating it by running `ollama pull dolphin-mixtral:latest`"
Additionally, when checking the version of Ollama with `ollama -v`, the response was:
"ollama version is 0.0.0
Warning: client version is 0.1.16"
### Reproduction Steps
**Steps to Reproduce:**
1. Run `ollama pull dolphin-mixtral:latest` to update the model.
2. Attempt to run the model in Ollama.
3. Error occurs during the model loading process.
### Expected vs Actual Behavior
**Expected Behavior:**
The model should load successfully after being updated.
**Actual Behavior:**
The model fails to load with an error indicating potential incompatibility with the current Ollama version.
### Environment Details
**Environment:**
- Chip: Apple M1 Max
- Operating System: macOS Sonoma Version 14.2
- Ollama Version: 0.0.0 (client version 0.1.16)
### Attempted Solutions
I've tried updating the model as suggested by the error message, but the issue persists.
|
{
"login": "mariusraupach",
"id": 108870491,
"node_id": "U_kgDOBn07Ww",
"avatar_url": "https://avatars.githubusercontent.com/u/108870491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariusraupach",
"html_url": "https://github.com/mariusraupach",
"followers_url": "https://api.github.com/users/mariusraupach/followers",
"following_url": "https://api.github.com/users/mariusraupach/following{/other_user}",
"gists_url": "https://api.github.com/users/mariusraupach/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariusraupach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariusraupach/subscriptions",
"organizations_url": "https://api.github.com/users/mariusraupach/orgs",
"repos_url": "https://api.github.com/users/mariusraupach/repos",
"events_url": "https://api.github.com/users/mariusraupach/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariusraupach/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1605/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/1605/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6308
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6308/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6308/comments
|
https://api.github.com/repos/ollama/ollama/issues/6308/events
|
https://github.com/ollama/ollama/issues/6308
| 2,459,442,385
|
I_kwDOJ0Z1Ps6SmBzR
| 6,308
|
Getting `Error: unexpected status code 200` when pulling a model from an internal registry v0.3.1 and above
|
{
"login": "killerwhile",
"id": 228035,
"node_id": "MDQ6VXNlcjIyODAzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/228035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/killerwhile",
"html_url": "https://github.com/killerwhile",
"followers_url": "https://api.github.com/users/killerwhile/followers",
"following_url": "https://api.github.com/users/killerwhile/following{/other_user}",
"gists_url": "https://api.github.com/users/killerwhile/gists{/gist_id}",
"starred_url": "https://api.github.com/users/killerwhile/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/killerwhile/subscriptions",
"organizations_url": "https://api.github.com/users/killerwhile/orgs",
"repos_url": "https://api.github.com/users/killerwhile/repos",
"events_url": "https://api.github.com/users/killerwhile/events{/privacy}",
"received_events_url": "https://api.github.com/users/killerwhile/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 2
| 2024-08-11T06:08:08
| 2024-09-04T08:51:08
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Starting version 0.3.1, when pulling a model from an internal registry (https://distribution.github.io/distribution/), I'm getting the error `unexpected status code 200`.
Versions up to 0.3.0 worked properly with this setup.
The line returning the error seems to be https://github.com/ollama/ollama/compare/v0.3.0...v0.3.1#diff-9e32d213fc229fc9c327863932f4fc8a875d854333b5ad2dffa9b43fd0848232R226
# How to reproduce
Via docker-compose, I create a test environment with ollama and registry.
```
version: "3"
services:
ollama:
image: ollama/ollama:0.3.0
ports:
- "11434"
registry:
image: registry:2
environment:
REGISTRY_LOG_LEVEL: debug
REGISTRY_LOG_ACCESSLOG_DISABLED: "false"
ports:
- "5000"
```
Start the test stack via `docker compose up -d`.
In the snippet above, I'm using ollama v0.3.0.
The following commands will pull a model (qwen2:0.5b, for the sake of size) and push it to the local registry.
```
docker compose exec ollama ollama pull qwen2:0.5b
docker compose exec ollama ollama cp qwen2:0.5b registry:5000/library/qwen2:0.5b
docker compose exec ollama ollama push registry:5000/library/qwen2:0.5b --insecure
```
Now the models can be removed from ollama:
```
docker compose exec ollama ollama rm qwen2:0.5b registry:5000/library/qwen2:0.5b
```
And re-downloaded from the local registry:
```
docker compose exec ollama ollama pull registry:5000/library/qwen2:0.5b --insecure
```
With ollama version up to 0.3.0 (included), this works.
With ollama version from 0.3.1 (included), I'm getting the following error:
```
Error: unexpected status code 200
```
### OS
Docker
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.3.1 and above
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6308/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6925
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6925/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6925/comments
|
https://api.github.com/repos/ollama/ollama/issues/6925/events
|
https://github.com/ollama/ollama/pull/6925
| 2,543,988,914
|
PR_kwDOJ0Z1Ps58cXfW
| 6,925
|
llama: Go server support for Jetsons
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-24T00:17:32
| 2024-10-08T15:53:58
| 2024-10-08T15:53:58
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6925",
"html_url": "https://github.com/ollama/ollama/pull/6925",
"diff_url": "https://github.com/ollama/ollama/pull/6925.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6925.patch",
"merged_at": null
}
|
Complementary to #6400 for the Go branch
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6925/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6925/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3325
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3325/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3325/comments
|
https://api.github.com/repos/ollama/ollama/issues/3325/events
|
https://github.com/ollama/ollama/issues/3325
| 2,204,443,847
|
I_kwDOJ0Z1Ps6DZSTH
| 3,325
|
Binary for Mac Intel doesn't work.
|
{
"login": "shyamalschandra",
"id": 9545735,
"node_id": "MDQ6VXNlcjk1NDU3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9545735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shyamalschandra",
"html_url": "https://github.com/shyamalschandra",
"followers_url": "https://api.github.com/users/shyamalschandra/followers",
"following_url": "https://api.github.com/users/shyamalschandra/following{/other_user}",
"gists_url": "https://api.github.com/users/shyamalschandra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shyamalschandra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shyamalschandra/subscriptions",
"organizations_url": "https://api.github.com/users/shyamalschandra/orgs",
"repos_url": "https://api.github.com/users/shyamalschandra/repos",
"events_url": "https://api.github.com/users/shyamalschandra/events{/privacy}",
"received_events_url": "https://api.github.com/users/shyamalschandra/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677279472,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A",
"url": "https://api.github.com/repos/ollama/ollama/labels/macos",
"name": "macos",
"color": "E2DBC0",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-03-24T16:35:25
| 2024-04-12T22:12:53
| 2024-04-12T22:12:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Binary for Mac Intel doesn't work and is corrupted before installation.
### What did you expect to see?
No problems with starting ollama-gui.
### Steps to reproduce
Download the Mac Intel version of ollama-gui and double-click to install it to Applications.
### Are there any recent changes that introduced the issue?
No idea.
### OS
macOS
### Architecture
x86
### Platform
_No response_
### Ollama version
0.1.29
### GPU
AMD
### GPU info
Graphics/Displays:
Radeon Pro 575X:
Chipset Model: Radeon Pro 575X
Type: GPU
Bus: PCIe
PCIe Lane Width: x16
VRAM (Total): 4 GB
Vendor: AMD (0x1002)
Device ID: 0x67df
Revision ID: 0x00c4
ROM Revision: 113-D0008A-042
VBIOS Version: 113-D0008A14GP-003
EFI Driver Version: 01.B1.042
Metal Support: Metal 2
Displays:
iMac:
Display Type: Built-In Retina LCD
Resolution: Retina 5K (5120 x 2880)
Framebuffer Depth: 30-Bit Color (ARGB2101010)
Main Display: Yes
Mirror: Off
Online: Yes
Automatically Adjust Brightness: Yes
Connection Type: Internal
### CPU
AMD
### Other software
MacOS Sonoma 14.4 (23E214)
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3325/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5491
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5491/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5491/comments
|
https://api.github.com/repos/ollama/ollama/issues/5491/events
|
https://github.com/ollama/ollama/pull/5491
| 2,391,611,605
|
PR_kwDOJ0Z1Ps50fDUy
| 5,491
|
Fix assert on small embedding inputs
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-05T01:19:10
| 2024-07-05T15:20:59
| 2024-07-05T15:20:57
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5491",
"html_url": "https://github.com/ollama/ollama/pull/5491",
"diff_url": "https://github.com/ollama/ollama/pull/5491.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5491.patch",
"merged_at": "2024-07-05T15:20:57"
}
|
Tensors allocated for pooling layers were too small on 2-3 character inputs, causing assertions to fire.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5491/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7802
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7802/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7802/comments
|
https://api.github.com/repos/ollama/ollama/issues/7802/events
|
https://github.com/ollama/ollama/issues/7802
| 2,684,362,922
|
I_kwDOJ0Z1Ps6gACCq
| 7,802
|
minimum viable GGUF crashes server on run
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-11-22T19:32:33
| 2024-11-22T19:35:17
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I ran `ollama run bmizerany/smol`, and saw the server crash violently.
I expected ollama to tell me, from the terminal session running `ollama run`, that it could not run the model for `<reasons>`, and for the server to remain running and unaffected.
```
# Client
; ollama run bmizerany/smol
```
```
# Server
[GIN] 2024/11/22 - 11:32:09 | 200 | 305.583µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/22 - 11:32:09 | 200 | 2.020416ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/11/22 - 11:32:09 | 200 | 550.916µs | 127.0.0.1 | POST "/api/show"
time=2024-11-22T11:32:09.588-08:00 level=WARN source=memory.go:115 msg="model missing blk.0 layer size"
panic: runtime error: integer divide by zero
goroutine 27 [running]:
github.com/ollama/ollama/llm.EstimateGPULayers({_, _, _}, _, {_, _, _},
{{0x0, 0x800, 0x200, ...}, ...})
github.com/ollama/ollama/llm/memory.go:122 +0x13f0
github.com/ollama/ollama/llm.PredictServerFit({0x14000495cb8?, 0x104ae52b4?, 0x1400001a090?}, 0x1400059a920, {0x199?, 0x105681bc0?, _}, {_, _, _}, ...)
github.com/ollama/ollama/llm/memory.go:20 +0xa8
github.com/ollama/ollama/server.pickBestFitGPUs(0x140001d0900, 0x1400059a920, {0x140004aa780?, 0xfffffffffffffffc?, 0x105286653?})
github.com/ollama/ollama/server/sched.go:627 +0x2a0
github.com/ollama/ollama/server.(*Scheduler).processPending(0x140000c39e0, {0x10575b8d0, 0x140000c5ea0})
github.com/ollama/ollama/server/sched.go:170 +0xac0
github.com/ollama/ollama/server.(*Scheduler).Run.func1()
github.com/ollama/ollama/server/sched.go:96 +0x28
created by github.com/ollama/ollama/server.(*Scheduler).Run in goroutine 1
github.com/ollama/ollama/server/sched.go:95 +0xc4
2024/11/22 11:32:10 routes.go:1060: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/Users/bmizerany/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR:]"
time=2024-11-22T11:32:10.640-08:00 level=INFO source=images.go:725 msg="total blobs: 6"
time=2024-11-22T11:32:10.641-08:00 level=INFO source=images.go:732 msg="total unused blobs removed: 0"
time=2024-11-22T11:32:10.642-08:00 level=INFO source=routes.go:1106 msg="Listening on 127.0.0.1:11434 (version 0.1.45)"
time=2024-11-22T11:32:10.652-08:00 level=WARN source=assets.go:100 msg="unable to cleanup stale tmpdir" path=/var/folders/db/svmm3t1x3yn4d1skpbq3ddv00000gn/T/ollama2998818457 error="remove /var/folders/db/svmm3t1x3yn4d1skpbq3ddv00000gn/T/ollama2998818457: directory not empty"
time=2024-11-22T11:32:10.652-08:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/var/folders/db/svmm3t1x3yn4d1skpbq3ddv00000gn/T/ollama827611131/runners
time=2024-11-22T11:32:10.679-08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [metal]"
time=2024-11-22T11:32:10.740-08:00 level=INFO source=types.go:98 msg="inference compute" id=0 library=metal compute="" driver=0.0 name="" total="96.0 GiB" available="96.0 GiB"
```
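The panic above is an integer divide by zero in `EstimateGPULayers` after the `"model missing blk.0 layer size"` warning. A hypothetical sketch of that failure mode and the obvious guard; the function and parameter names here are illustrative, not Ollama's actual API:

```python
# Hypothetical sketch: dividing available memory by a per-layer size
# that is zero when the GGUF contains no "blk.0" tensors panics in Go
# (and raises ZeroDivisionError in Python). Guarding on layer_size <= 0
# lets the scheduler report "can't fit any layers" instead of crashing.
def fit_layers(available_vram: int, layer_size: int) -> int:
    if layer_size <= 0:
        return 0  # model carries no layer data; nothing to offload
    return available_vram // layer_size

print(fit_layers(96 * 1024**3, 0))  # 0, rather than a divide-by-zero crash
```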
The GGUF (xxd):
```
00000000: 4747 5546 0300 0000 0000 0000 0000 0000 GGUF............
00000010: 0000 0000 0000 0000 ........
```
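The 24-byte dump above is just a bare GGUF v3 header: the `GGUF` magic, a little-endian uint32 version, then uint64 tensor and metadata-KV counts, both zero. A small Python sketch that reproduces the exact bytes shown:

```python
import struct

def minimal_gguf() -> bytes:
    """Build the smallest well-formed GGUF v3 file: header only,
    zero tensors and zero metadata key/value pairs."""
    return (
        b"GGUF"                 # magic
        + struct.pack("<I", 3)  # version 3, little-endian uint32
        + struct.pack("<Q", 0)  # tensor_count, uint64
        + struct.pack("<Q", 0)  # metadata_kv_count, uint64
    )

print(minimal_gguf().hex())  # matches the xxd dump above, 24 bytes
```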
### OS
Darwin MacBook-Pro-3.attlocal.net 23.4.0 Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:37 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6031 arm64
### GPU
local
### CPU
see above
### Ollama version
ollama version is 0.4.3
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7802/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6004
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6004/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6004/comments
|
https://api.github.com/repos/ollama/ollama/issues/6004/events
|
https://github.com/ollama/ollama/issues/6004
| 2,433,190,929
|
I_kwDOJ0Z1Ps6RB4wR
| 6,004
|
Each word gets returned instead of the entire message being sent
|
{
"login": "SusgUY446",
"id": 129160115,
"node_id": "U_kgDOB7LTsw",
"avatar_url": "https://avatars.githubusercontent.com/u/129160115?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SusgUY446",
"html_url": "https://github.com/SusgUY446",
"followers_url": "https://api.github.com/users/SusgUY446/followers",
"following_url": "https://api.github.com/users/SusgUY446/following{/other_user}",
"gists_url": "https://api.github.com/users/SusgUY446/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SusgUY446/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SusgUY446/subscriptions",
"organizations_url": "https://api.github.com/users/SusgUY446/orgs",
"repos_url": "https://api.github.com/users/SusgUY446/repos",
"events_url": "https://api.github.com/users/SusgUY446/events{/privacy}",
"received_events_url": "https://api.github.com/users/SusgUY446/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-07-27T02:20:30
| 2024-07-30T17:31:23
| 2024-07-30T17:31:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When using curl to access the API, it returns each word as its own JSON object.
### OS
Linux
### GPU
Intel
### CPU
Intel
### Ollama version
0.3.0
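
This is Ollama's streaming response format: by default each NDJSON line carries a partial message, and clients concatenate the `content` fields (or set `"stream": false` in the request to receive a single response). A minimal sketch of the client-side join, assuming the documented `/api/chat` chunk shape:

```python
import json

def join_stream(lines):
    """Concatenate the partial "content" fields from streamed
    /api/chat NDJSON lines into the full reply text."""
    return "".join(
        json.loads(line)["message"]["content"]
        for line in lines
        if line.strip()
    )

chunks = [
    '{"message": {"content": "Hello"}}',
    '{"message": {"content": " world"}}',
]
print(join_stream(chunks))  # Hello world
```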
|
{
"login": "SusgUY446",
"id": 129160115,
"node_id": "U_kgDOB7LTsw",
"avatar_url": "https://avatars.githubusercontent.com/u/129160115?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SusgUY446",
"html_url": "https://github.com/SusgUY446",
"followers_url": "https://api.github.com/users/SusgUY446/followers",
"following_url": "https://api.github.com/users/SusgUY446/following{/other_user}",
"gists_url": "https://api.github.com/users/SusgUY446/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SusgUY446/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SusgUY446/subscriptions",
"organizations_url": "https://api.github.com/users/SusgUY446/orgs",
"repos_url": "https://api.github.com/users/SusgUY446/repos",
"events_url": "https://api.github.com/users/SusgUY446/events{/privacy}",
"received_events_url": "https://api.github.com/users/SusgUY446/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6004/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6242
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6242/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6242/comments
|
https://api.github.com/repos/ollama/ollama/issues/6242/events
|
https://github.com/ollama/ollama/pull/6242
| 2,454,290,729
|
PR_kwDOJ0Z1Ps53waTo
| 6,242
|
whisper branch merge conflicts
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-07T20:28:19
| 2024-08-07T20:28:50
| 2024-08-07T20:28:31
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6242",
"html_url": "https://github.com/ollama/ollama/pull/6242",
"diff_url": "https://github.com/ollama/ollama/pull/6242.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6242.patch",
"merged_at": "2024-08-07T20:28:30"
}
| null |
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6242/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7427
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7427/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7427/comments
|
https://api.github.com/repos/ollama/ollama/issues/7427/events
|
https://github.com/ollama/ollama/issues/7427
| 2,624,998,518
|
I_kwDOJ0Z1Ps6cdkx2
| 7,427
|
Reporting for not working models, uploaded by users
|
{
"login": "ODDda",
"id": 56647606,
"node_id": "MDQ6VXNlcjU2NjQ3NjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/56647606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ODDda",
"html_url": "https://github.com/ODDda",
"followers_url": "https://api.github.com/users/ODDda/followers",
"following_url": "https://api.github.com/users/ODDda/following{/other_user}",
"gists_url": "https://api.github.com/users/ODDda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ODDda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ODDda/subscriptions",
"organizations_url": "https://api.github.com/users/ODDda/orgs",
"repos_url": "https://api.github.com/users/ODDda/repos",
"events_url": "https://api.github.com/users/ODDda/events{/privacy}",
"received_events_url": "https://api.github.com/users/ODDda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 0
| 2024-10-30T18:38:08
| 2024-11-17T14:18:20
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, can you add an option in the sidebar of your model library for reporting models that don't work? For example, one of them is this model:
https://ollama.com/leeplenty/lumimaid-v0.2:12b
It just spouts nonsense.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7427/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3587
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3587/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3587/comments
|
https://api.github.com/repos/ollama/ollama/issues/3587/events
|
https://github.com/ollama/ollama/pull/3587
| 2,236,616,054
|
PR_kwDOJ0Z1Ps5sTRgl
| 3,587
|
types/model: remove DisplayLong
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-10T23:54:49
| 2024-04-10T23:55:13
| 2024-04-10T23:55:12
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3587",
"html_url": "https://github.com/ollama/ollama/pull/3587",
"diff_url": "https://github.com/ollama/ollama/pull/3587.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3587.patch",
"merged_at": "2024-04-10T23:55:12"
}
| null |
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3587/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4278
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4278/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4278/comments
|
https://api.github.com/repos/ollama/ollama/issues/4278/events
|
https://github.com/ollama/ollama/pull/4278
| 2,287,110,167
|
PR_kwDOJ0Z1Ps5u9bv0
| 4,278
|
merge code
|
{
"login": "uppercaveman",
"id": 4667056,
"node_id": "MDQ6VXNlcjQ2NjcwNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4667056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uppercaveman",
"html_url": "https://github.com/uppercaveman",
"followers_url": "https://api.github.com/users/uppercaveman/followers",
"following_url": "https://api.github.com/users/uppercaveman/following{/other_user}",
"gists_url": "https://api.github.com/users/uppercaveman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uppercaveman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uppercaveman/subscriptions",
"organizations_url": "https://api.github.com/users/uppercaveman/orgs",
"repos_url": "https://api.github.com/users/uppercaveman/repos",
"events_url": "https://api.github.com/users/uppercaveman/events{/privacy}",
"received_events_url": "https://api.github.com/users/uppercaveman/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-09T07:50:17
| 2024-05-09T07:50:34
| 2024-05-09T07:50:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4278",
"html_url": "https://github.com/ollama/ollama/pull/4278",
"diff_url": "https://github.com/ollama/ollama/pull/4278.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4278.patch",
"merged_at": null
}
| null |
{
"login": "uppercaveman",
"id": 4667056,
"node_id": "MDQ6VXNlcjQ2NjcwNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4667056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uppercaveman",
"html_url": "https://github.com/uppercaveman",
"followers_url": "https://api.github.com/users/uppercaveman/followers",
"following_url": "https://api.github.com/users/uppercaveman/following{/other_user}",
"gists_url": "https://api.github.com/users/uppercaveman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uppercaveman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uppercaveman/subscriptions",
"organizations_url": "https://api.github.com/users/uppercaveman/orgs",
"repos_url": "https://api.github.com/users/uppercaveman/repos",
"events_url": "https://api.github.com/users/uppercaveman/events{/privacy}",
"received_events_url": "https://api.github.com/users/uppercaveman/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4278/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6959
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6959/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6959/comments
|
https://api.github.com/repos/ollama/ollama/issues/6959/events
|
https://github.com/ollama/ollama/pull/6959
| 2,548,651,075
|
PR_kwDOJ0Z1Ps58sWqn
| 6,959
|
update default model to llama3.2
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-09-25T17:59:59
| 2024-09-26T15:08:42
| 2024-09-25T18:11:22
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6959",
"html_url": "https://github.com/ollama/ollama/pull/6959",
"diff_url": "https://github.com/ollama/ollama/pull/6959.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6959.patch",
"merged_at": "2024-09-25T18:11:22"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6959/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2315
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2315/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2315/comments
|
https://api.github.com/repos/ollama/ollama/issues/2315/events
|
https://github.com/ollama/ollama/issues/2315
| 2,113,774,563
|
I_kwDOJ0Z1Ps59_aPj
| 2,315
|
Apple gpu support for Linux
|
{
"login": "maxiwee69",
"id": 81492222,
"node_id": "MDQ6VXNlcjgxNDkyMjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/81492222?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxiwee69",
"html_url": "https://github.com/maxiwee69",
"followers_url": "https://api.github.com/users/maxiwee69/followers",
"following_url": "https://api.github.com/users/maxiwee69/following{/other_user}",
"gists_url": "https://api.github.com/users/maxiwee69/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxiwee69/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxiwee69/subscriptions",
"organizations_url": "https://api.github.com/users/maxiwee69/orgs",
"repos_url": "https://api.github.com/users/maxiwee69/repos",
"events_url": "https://api.github.com/users/maxiwee69/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxiwee69/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 9
| 2024-02-02T00:17:53
| 2024-12-30T05:45:08
| 2024-02-02T10:20:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
So maybe you know about [https://asahilinux.org/](https://asahilinux.org/); if not, it's a Fedora-based Linux distribution for M-series Macs. When I tried to get Ollama to run on it, it told me `WARNING: No NVIDIA GPU detected. Ollama will run in CPU-only mode`. I know fixing this would only help a small number of people, but I would highly appreciate it.
|
{
"login": "maxiwee69",
"id": 81492222,
"node_id": "MDQ6VXNlcjgxNDkyMjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/81492222?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxiwee69",
"html_url": "https://github.com/maxiwee69",
"followers_url": "https://api.github.com/users/maxiwee69/followers",
"following_url": "https://api.github.com/users/maxiwee69/following{/other_user}",
"gists_url": "https://api.github.com/users/maxiwee69/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxiwee69/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxiwee69/subscriptions",
"organizations_url": "https://api.github.com/users/maxiwee69/orgs",
"repos_url": "https://api.github.com/users/maxiwee69/repos",
"events_url": "https://api.github.com/users/maxiwee69/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxiwee69/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2315/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2315/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1457
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1457/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1457/comments
|
https://api.github.com/repos/ollama/ollama/issues/1457/events
|
https://github.com/ollama/ollama/issues/1457
| 2,034,648,458
|
I_kwDOJ0Z1Ps55RkWK
| 1,457
|
How to add MOE model Mistral 8x7B
|
{
"login": "SabareeshGC",
"id": 114115146,
"node_id": "U_kgDOBs1CSg",
"avatar_url": "https://avatars.githubusercontent.com/u/114115146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SabareeshGC",
"html_url": "https://github.com/SabareeshGC",
"followers_url": "https://api.github.com/users/SabareeshGC/followers",
"following_url": "https://api.github.com/users/SabareeshGC/following{/other_user}",
"gists_url": "https://api.github.com/users/SabareeshGC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SabareeshGC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SabareeshGC/subscriptions",
"organizations_url": "https://api.github.com/users/SabareeshGC/orgs",
"repos_url": "https://api.github.com/users/SabareeshGC/repos",
"events_url": "https://api.github.com/users/SabareeshGC/events{/privacy}",
"received_events_url": "https://api.github.com/users/SabareeshGC/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2023-12-11T01:18:31
| 2023-12-13T22:15:11
| 2023-12-13T22:15:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Here is the model; I'm not sure how to add it to Ollama:
https://huggingface.co/mattshumer/mistral-8x7b-chat
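For anyone trying this locally, Ollama's documented import path for GGUF weights is a Modelfile with a `FROM` line; a minimal sketch (the weights file name here is hypothetical, and Mixtral's MoE architecture also needs a bundled llama.cpp build that supports it):

```
# Modelfile (hypothetical weights file name)
FROM ./mixtral-8x7b-instruct.Q4_K_M.gguf
```

Then `ollama create mixtral-8x7b -f Modelfile` registers it and `ollama run mixtral-8x7b` starts a chat.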
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1457/reactions",
"total_count": 9,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 8,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1457/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7214
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7214/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7214/comments
|
https://api.github.com/repos/ollama/ollama/issues/7214/events
|
https://github.com/ollama/ollama/issues/7214
| 2,589,769,883
|
I_kwDOJ0Z1Ps6aXMCb
| 7,214
|
Unable to serve models
|
{
"login": "adesso-dominik-chodounsky",
"id": 162981954,
"node_id": "U_kgDOCbboQg",
"avatar_url": "https://avatars.githubusercontent.com/u/162981954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adesso-dominik-chodounsky",
"html_url": "https://github.com/adesso-dominik-chodounsky",
"followers_url": "https://api.github.com/users/adesso-dominik-chodounsky/followers",
"following_url": "https://api.github.com/users/adesso-dominik-chodounsky/following{/other_user}",
"gists_url": "https://api.github.com/users/adesso-dominik-chodounsky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adesso-dominik-chodounsky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adesso-dominik-chodounsky/subscriptions",
"organizations_url": "https://api.github.com/users/adesso-dominik-chodounsky/orgs",
"repos_url": "https://api.github.com/users/adesso-dominik-chodounsky/repos",
"events_url": "https://api.github.com/users/adesso-dominik-chodounsky/events{/privacy}",
"received_events_url": "https://api.github.com/users/adesso-dominik-chodounsky/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-10-15T20:09:13
| 2024-10-28T10:54:31
| 2024-10-15T20:11:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I downloaded the macOS version of Ollama. I am able to pull models and run them, but not serve them.
I get the following error: `Error: listen tcp 127.0.0.1:11434: bind: address already in use`
I have tried restarting the system, reinstalling, and killing the process ID, but every time I run `ollama serve` I still get this error.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.13
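Incidentally, `bind: address already in use` means some process is already listening on 11434; on macOS this is often the menu-bar app, which runs its own server, so `ollama serve` has nothing left to bind. A quick shell check is `lsof -i :11434`; as a minimal sketch, the same check in Python (host and port are taken from the error message above):

```python
import socket

def port_in_use(host: str, port: int) -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on a successful TCP connect
        return s.connect_ex((host, port)) == 0

print(port_in_use("127.0.0.1", 11434))  # True while a server holds the port
```

If this prints `True` before you ever run `ollama serve`, another Ollama instance (or another app) owns the port.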
|
{
"login": "adesso-dominik-chodounsky",
"id": 162981954,
"node_id": "U_kgDOCbboQg",
"avatar_url": "https://avatars.githubusercontent.com/u/162981954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adesso-dominik-chodounsky",
"html_url": "https://github.com/adesso-dominik-chodounsky",
"followers_url": "https://api.github.com/users/adesso-dominik-chodounsky/followers",
"following_url": "https://api.github.com/users/adesso-dominik-chodounsky/following{/other_user}",
"gists_url": "https://api.github.com/users/adesso-dominik-chodounsky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adesso-dominik-chodounsky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adesso-dominik-chodounsky/subscriptions",
"organizations_url": "https://api.github.com/users/adesso-dominik-chodounsky/orgs",
"repos_url": "https://api.github.com/users/adesso-dominik-chodounsky/repos",
"events_url": "https://api.github.com/users/adesso-dominik-chodounsky/events{/privacy}",
"received_events_url": "https://api.github.com/users/adesso-dominik-chodounsky/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7214/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6564
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6564/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6564/comments
|
https://api.github.com/repos/ollama/ollama/issues/6564/events
|
https://github.com/ollama/ollama/issues/6564
| 2,496,370,231
|
I_kwDOJ0Z1Ps6Uy5Y3
| 6,564
|
add Qwen2-VL
|
{
"login": "FelisDwan",
"id": 99171996,
"node_id": "U_kgDOBek-nA",
"avatar_url": "https://avatars.githubusercontent.com/u/99171996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FelisDwan",
"html_url": "https://github.com/FelisDwan",
"followers_url": "https://api.github.com/users/FelisDwan/followers",
"following_url": "https://api.github.com/users/FelisDwan/following{/other_user}",
"gists_url": "https://api.github.com/users/FelisDwan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FelisDwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FelisDwan/subscriptions",
"organizations_url": "https://api.github.com/users/FelisDwan/orgs",
"repos_url": "https://api.github.com/users/FelisDwan/repos",
"events_url": "https://api.github.com/users/FelisDwan/events{/privacy}",
"received_events_url": "https://api.github.com/users/FelisDwan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 102
| 2024-08-30T06:38:18
| 2025-01-30T04:33:50
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
SOTA lightweight vision model
[https://github.com/QwenLM/Qwen2-VL](https://github.com/QwenLM/Qwen2-VL)
llama.cpp issue [#9246](https://github.com/ggerganov/llama.cpp/issues/9246)
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6564/reactions",
"total_count": 237,
"+1": 190,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 36,
"eyes": 11
}
|
https://api.github.com/repos/ollama/ollama/issues/6564/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5710
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5710/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5710/comments
|
https://api.github.com/repos/ollama/ollama/issues/5710/events
|
https://github.com/ollama/ollama/pull/5710
| 2,409,729,969
|
PR_kwDOJ0Z1Ps51cMLb
| 5,710
|
Bump linux ROCm to 6.1.2
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-15T22:11:23
| 2024-07-16T02:50:17
| 2024-07-15T22:32:18
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5710",
"html_url": "https://github.com/ollama/ollama/pull/5710",
"diff_url": "https://github.com/ollama/ollama/pull/5710.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5710.patch",
"merged_at": "2024-07-15T22:32:18"
}
|
This might help #5708
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5710/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7528
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7528/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7528/comments
|
https://api.github.com/repos/ollama/ollama/issues/7528/events
|
https://github.com/ollama/ollama/issues/7528
| 2,638,535,011
|
I_kwDOJ0Z1Ps6dRNlj
| 7,528
|
Cannot get a model prefixed with a namespace from /v1/models/{model} endpoint
|
{
"login": "skobkin",
"id": 967576,
"node_id": "MDQ6VXNlcjk2NzU3Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/967576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skobkin",
"html_url": "https://github.com/skobkin",
"followers_url": "https://api.github.com/users/skobkin/followers",
"following_url": "https://api.github.com/users/skobkin/following{/other_user}",
"gists_url": "https://api.github.com/users/skobkin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skobkin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skobkin/subscriptions",
"organizations_url": "https://api.github.com/users/skobkin/orgs",
"repos_url": "https://api.github.com/users/skobkin/repos",
"events_url": "https://api.github.com/users/skobkin/events{/privacy}",
"received_events_url": "https://api.github.com/users/skobkin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 4
| 2024-11-06T16:02:04
| 2025-01-22T15:35:18
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
My initial goal is to check whether a specific model is available using the Ollama API.
I use the OpenAI library `github.com/sashabaranov/go-openai` to do that.
The problem is that when I try to get models that are not in the main catalog and have an author prefix, I get a `404` and a non-JSON response, as if the routing is failing.
To rule out a bug in the library, I tried the same with curl.
The first thing I did was request the full list of models to make sure the model is really in the list:
``` shell
curl http://localhost:11434/v1/models
```
which resulted in the following response:
```json
{"object":"list","data":[{"id":"T-lite-instruct-0.1.Q4_K_M.gguf:latest","object":"model","created":1730118522,"owned_by":"library"},{"id":"x/llama3.2-vision:11b-instruct-q8_0","object":"model","created":1729612391,"owned_by":"x"},{"id":"mannix/llama3.1-8b-lexi:q8_0","object":"model","created":1729272492,"owned_by":"mannix"},{"id":"llama3.2:3b-instruct-q4_K_M","object":"model","created":1727650425,"owned_by":"library"},{"id":"reader-lm:1.5b-q6_K","object":"model","created":1727092629,"owned_by":"library"},{"id":"qwen2:7b-instruct-q4_K_M","object":"model","created":1727092627,"owned_by":"library"},{"id":"phi3:14b-medium-4k-instruct-q4_K_M","object":"model","created":1727092620,"owned_by":"library"},{"id":"nuextract:3.8b-q4_K_M","object":"model","created":1727092619,"owned_by":"library"},{"id":"hermes3:8b-llama3.1-q6_K","object":"model","created":1727092618,"owned_by":"library"},{"id":"nomic-embed-text:latest","object":"model","created":1727092617,"owned_by":"library"},{"id":"nemotron-mini:4b-instruct-q4_K_M","object":"model","created":1727092615,"owned_by":"library"},{"id":"mxbai-embed-large:latest","object":"model","created":1727092614,"owned_by":"library"},{"id":"mistral-nemo:12b-instruct-2407-q4_K_M","object":"model","created":1727092613,"owned_by":"library"},{"id":"llama3.1:8b-instruct-q6_K","object":"model","created":1727092609,"owned_by":"library"},{"id":"gemma2:9b-instruct-q4_K_M","object":"model","created":1727092603,"owned_by":"library"},{"id":"gemma2:27b-instruct-q4_K_M","object":"model","created":1727092601,"owned_by":"library"},{"id":"deepseek-coder-v2:16b-lite-instruct-q4_K_M","object":"model","created":1727092600,"owned_by":"library"},{"id":"codellama:13b-instruct-q4_K_M","object":"model","created":1727092599,"owned_by":"library"},{"id":"codegemma:latest","object":"model","created":1727092598,"owned_by":"library"},{"id":"phi3:3.8b-mini-instruct-4k-q4_K_M","object":"model","created":1713898146,"owned_by":"library"}]}
```
So here's the model I wanted to request: `{"id":"mannix/llama3.1-8b-lexi:q8_0","object":"model","created":1729272492,"owned_by":"mannix"}`
```shell
curl http://localhost:11434/v1/models/mannix/llama3.1-8b-lexi:q8_0
```
No luck:
```
404 page not found%
```
The problem is very obvious if you know how the back-end works; it really looks like a routing problem.
The obvious workaround for me was to percent-encode the value to remove the `/` character from the URL:
```shell
curl http://localhost:11434/v1/models/mannix%2Fllama3.1-8b-lexi%3Aq8_0
```
I got an error again:
```
404 page not found%
```
To ensure that I was using the correct endpoint, I tried another model:
```shell
curl http://localhost:11434/v1/models/qwen2:7b-instruct-q4_K_M
```
It worked:
```json
{"id":"qwen2:7b-instruct-q4_K_M","object":"model","created":1727092627,"owned_by":"library"}
```
When I request non-encoded model ID, Ollama logs show this:
```
ollama | [GIN] 2024/11/06 - 15:59:11 | 404 | 13.715µs | 172.24.0.1 | GET "/v1/models/mannix/llama3.1-8b-lexi:q8_0"
```
When I request encoded model ID, Ollama logs show this:
```
ollama | [GIN] 2024/11/06 - 15:58:01 | 404 | 15.239µs | 172.24.0.1 | GET "/v1/models/mannix/llama3.1-8b-lexi:q8_0"
```
Which is basically the same result, so I guess the failure may be caused by the URL being decoded before the route is matched.
Most likely the same applies to all previous Ollama versions, not only `0.4.0-rc6`.
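For what it's worth, the percent-encoded form used above can be reproduced with the standard library, which confirms the encoding itself was correct and the 404 really is on the routing side:

```python
from urllib.parse import quote

model_id = "mannix/llama3.1-8b-lexi:q8_0"
# safe="" forces '/' and ':' to be percent-encoded as well
encoded = quote(model_id, safe="")
print(encoded)  # mannix%2Fllama3.1-8b-lexi%3Aq8_0
```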
### OS
Docker
### GPU
AMD
### CPU
AMD
### Ollama version
0.4.0-rc6
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7528/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4351
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4351/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4351/comments
|
https://api.github.com/repos/ollama/ollama/issues/4351/events
|
https://github.com/ollama/ollama/issues/4351
| 2,290,790,936
|
I_kwDOJ0Z1Ps6IirIY
| 4,351
|
Pulling Multiple Models at Once
|
{
"login": "gusanmaz",
"id": 2552975,
"node_id": "MDQ6VXNlcjI1NTI5NzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2552975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gusanmaz",
"html_url": "https://github.com/gusanmaz",
"followers_url": "https://api.github.com/users/gusanmaz/followers",
"following_url": "https://api.github.com/users/gusanmaz/following{/other_user}",
"gists_url": "https://api.github.com/users/gusanmaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gusanmaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gusanmaz/subscriptions",
"organizations_url": "https://api.github.com/users/gusanmaz/orgs",
"repos_url": "https://api.github.com/users/gusanmaz/repos",
"events_url": "https://api.github.com/users/gusanmaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/gusanmaz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 10
| 2024-05-11T08:48:02
| 2025-01-29T08:01:58
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be nice to be able to pull multiple models in one go with Ollama.
Today I tried to run
```
ollama pull llava-phi3 llava-llama3 llama3-gradient phi3 moondream codeqwen
```
and it gave the following error:
```
Error: accepts 1 arg(s), received 6
```
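Until multi-argument pull is supported, looping over the list is the usual workaround (a shell one-liner works too: `for m in llava-phi3 phi3; do ollama pull "$m"; done`). A sketch of the same idea in Python, with the runner injectable so it degrades to a dry run when the `ollama` binary is absent:

```python
import shutil
import subprocess

# `ollama pull` accepts exactly one model name, so pull them one at a time.
models = ["llava-phi3", "llava-llama3", "llama3-gradient",
          "phi3", "moondream", "codeqwen"]

def pull_all(models, runner=None):
    """Pull each model in turn; `runner` is injectable for dry runs."""
    runner = runner or (lambda m: subprocess.run(["ollama", "pull", m], check=True))
    for m in models:
        runner(m)

if shutil.which("ollama"):
    pull_all(models)
else:
    # dry run when the binary is not on PATH
    pull_all(models, runner=lambda m: print("would pull:", m))
```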
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4351/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4351/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7972
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7972/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7972/comments
|
https://api.github.com/repos/ollama/ollama/issues/7972/events
|
https://github.com/ollama/ollama/issues/7972
| 2,723,571,177
|
I_kwDOJ0Z1Ps6iVmXp
| 7,972
|
Pleias
|
{
"login": "KyNorthstar",
"id": 10189808,
"node_id": "MDQ6VXNlcjEwMTg5ODA4",
"avatar_url": "https://avatars.githubusercontent.com/u/10189808?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KyNorthstar",
"html_url": "https://github.com/KyNorthstar",
"followers_url": "https://api.github.com/users/KyNorthstar/followers",
"following_url": "https://api.github.com/users/KyNorthstar/following{/other_user}",
"gists_url": "https://api.github.com/users/KyNorthstar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KyNorthstar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KyNorthstar/subscriptions",
"organizations_url": "https://api.github.com/users/KyNorthstar/orgs",
"repos_url": "https://api.github.com/users/KyNorthstar/repos",
"events_url": "https://api.github.com/users/KyNorthstar/events{/privacy}",
"received_events_url": "https://api.github.com/users/KyNorthstar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 2
| 2024-12-06T17:26:34
| 2024-12-14T16:25:50
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
[Pleias has been announced!](https://huggingface.co/blog/Pclanglais/common-models) An LLM trained only on text it's allowed to train on.
It would be fantastic to have this available in ollama, especially for us folks using computers with plenty of system RAM but barely any VRAM.
Here's a list of the models: https://huggingface.co/collections/PleIAs/common-models-674cd0667951ab7c4ef84cc4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7972/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7972/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7561
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7561/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7561/comments
|
https://api.github.com/repos/ollama/ollama/issues/7561/events
|
https://github.com/ollama/ollama/issues/7561
| 2,641,795,436
|
I_kwDOJ0Z1Ps6ddpls
| 7,561
|
/api/generate missing after new 0.4.0 release
|
{
"login": "chrisspen",
"id": 116631,
"node_id": "MDQ6VXNlcjExNjYzMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/116631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisspen",
"html_url": "https://github.com/chrisspen",
"followers_url": "https://api.github.com/users/chrisspen/followers",
"following_url": "https://api.github.com/users/chrisspen/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisspen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisspen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisspen/subscriptions",
"organizations_url": "https://api.github.com/users/chrisspen/orgs",
"repos_url": "https://api.github.com/users/chrisspen/repos",
"events_url": "https://api.github.com/users/chrisspen/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisspen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-11-07T18:04:18
| 2024-11-07T18:09:54
| 2024-11-07T18:09:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I installed the most recent version of Ollama yesterday to test the new vision model, but now all calls to [/api/generate](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion) result in a 404 error.
Has this path been silently deprecated? The docs still say it's there, but the most recent release no longer supports it.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.0
|
{
"login": "chrisspen",
"id": 116631,
"node_id": "MDQ6VXNlcjExNjYzMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/116631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisspen",
"html_url": "https://github.com/chrisspen",
"followers_url": "https://api.github.com/users/chrisspen/followers",
"following_url": "https://api.github.com/users/chrisspen/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisspen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisspen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisspen/subscriptions",
"organizations_url": "https://api.github.com/users/chrisspen/orgs",
"repos_url": "https://api.github.com/users/chrisspen/repos",
"events_url": "https://api.github.com/users/chrisspen/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisspen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7561/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4110
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4110/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4110/comments
|
https://api.github.com/repos/ollama/ollama/issues/4110/events
|
https://github.com/ollama/ollama/pull/4110
| 2,276,689,238
|
PR_kwDOJ0Z1Ps5ua-rw
| 4,110
|
split binaries into metadata and data layers
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-05-03T00:06:22
| 2024-11-21T18:20:59
| 2024-11-21T18:20:59
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4110",
"html_url": "https://github.com/ollama/ollama/pull/4110",
"diff_url": "https://github.com/ollama/ollama/pull/4110.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4110.patch",
"merged_at": null
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4110/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/471
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/471/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/471/comments
|
https://api.github.com/repos/ollama/ollama/issues/471/events
|
https://github.com/ollama/ollama/pull/471
| 1,882,785,311
|
PR_kwDOJ0Z1Ps5ZnhQR
| 471
|
fix empty response
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-09-05T22:03:48
| 2023-09-05T22:23:06
| 2023-09-05T22:23:05
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/471",
"html_url": "https://github.com/ollama/ollama/pull/471",
"diff_url": "https://github.com/ollama/ollama/pull/471.diff",
"patch_url": "https://github.com/ollama/ollama/pull/471.patch",
"merged_at": "2023-09-05T22:23:05"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/471/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2617
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2617/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2617/comments
|
https://api.github.com/repos/ollama/ollama/issues/2617/events
|
https://github.com/ollama/ollama/issues/2617
| 2,144,975,681
|
I_kwDOJ0Z1Ps5_2btB
| 2,617
|
Modelfile doesn't update (`keep_alive`?)
|
{
"login": "Red-exe-Engineer",
"id": 90420989,
"node_id": "MDQ6VXNlcjkwNDIwOTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/90420989?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Red-exe-Engineer",
"html_url": "https://github.com/Red-exe-Engineer",
"followers_url": "https://api.github.com/users/Red-exe-Engineer/followers",
"following_url": "https://api.github.com/users/Red-exe-Engineer/following{/other_user}",
"gists_url": "https://api.github.com/users/Red-exe-Engineer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Red-exe-Engineer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Red-exe-Engineer/subscriptions",
"organizations_url": "https://api.github.com/users/Red-exe-Engineer/orgs",
"repos_url": "https://api.github.com/users/Red-exe-Engineer/repos",
"events_url": "https://api.github.com/users/Red-exe-Engineer/events{/privacy}",
"received_events_url": "https://api.github.com/users/Red-exe-Engineer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-20T18:06:18
| 2024-02-20T18:53:41
| 2024-02-20T18:48:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I've noticed that whenever I run `ollama create <name> -f </path/to/modelfile>`, the Ollama server needs to restart, or the `keep_alive` timeout must expire, before the changes fully apply.
`ollama show --modelfile <name>` shows the updated version; however, when interacting with the model, it still seems to run the old modelfile.
Example: given a model with the template `{{ .Prompt }}`, if you update the modelfile and overwrite the template, the old one stays loaded even though `ollama show --template <model>` and `/show template` say otherwise.
I've tried asking around on Discord without success, so I'm creating an issue. Note that I'm running Ollama on Termux and have only done limited testing; however, I'm fairly sure something is wrong.
I'll be away for a few hours, hope you guys find the issue!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2617/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8358
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8358/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8358/comments
|
https://api.github.com/repos/ollama/ollama/issues/8358/events
|
https://github.com/ollama/ollama/issues/8358
| 2,776,965,378
|
I_kwDOJ0Z1Ps6lhSEC
| 8,358
|
Add Llama-3.1-Nemotron-70B-Instruct-HF
|
{
"login": "antman1p",
"id": 6889529,
"node_id": "MDQ6VXNlcjY4ODk1Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6889529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antman1p",
"html_url": "https://github.com/antman1p",
"followers_url": "https://api.github.com/users/antman1p/followers",
"following_url": "https://api.github.com/users/antman1p/following{/other_user}",
"gists_url": "https://api.github.com/users/antman1p/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antman1p/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antman1p/subscriptions",
"organizations_url": "https://api.github.com/users/antman1p/orgs",
"repos_url": "https://api.github.com/users/antman1p/repos",
"events_url": "https://api.github.com/users/antman1p/events{/privacy}",
"received_events_url": "https://api.github.com/users/antman1p/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2025-01-09T06:48:25
| 2025-01-09T06:57:33
| 2025-01-09T06:57:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can we add https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF to the available models?
|
{
"login": "antman1p",
"id": 6889529,
"node_id": "MDQ6VXNlcjY4ODk1Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6889529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antman1p",
"html_url": "https://github.com/antman1p",
"followers_url": "https://api.github.com/users/antman1p/followers",
"following_url": "https://api.github.com/users/antman1p/following{/other_user}",
"gists_url": "https://api.github.com/users/antman1p/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antman1p/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antman1p/subscriptions",
"organizations_url": "https://api.github.com/users/antman1p/orgs",
"repos_url": "https://api.github.com/users/antman1p/repos",
"events_url": "https://api.github.com/users/antman1p/events{/privacy}",
"received_events_url": "https://api.github.com/users/antman1p/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8358/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7836
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7836/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7836/comments
|
https://api.github.com/repos/ollama/ollama/issues/7836/events
|
https://github.com/ollama/ollama/pull/7836
| 2,692,706,862
|
PR_kwDOJ0Z1Ps6DHM6r
| 7,836
|
api: (Fix) Enable Tool streaming
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-11-26T00:45:30
| 2024-12-05T22:13:34
| 2024-11-27T21:40:58
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7836",
"html_url": "https://github.com/ollama/ollama/pull/7836",
"diff_url": "https://github.com/ollama/ollama/pull/7836.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7836.patch",
"merged_at": "2024-11-27T21:40:58"
}
|
As much desired by the community - https://github.com/ollama/ollama/issues/5796
- We currently do not stream tool calls correctly and instead return the data in `.Content` when streaming ToolCalls
- This breaks patterns for other clients, as well as for users who want to set streaming to true and not worry about switching back and forth
- This fix streams a full tool call back instead of returning partially formed tools, which means that if there are multiple tools, each full tool will be returned to the user as soon as it is recognized on Ollama's side
TODO:
- [x] Tests
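As a rough illustration of the intended client-side behavior (a sketch with simulated chunk shapes loosely modeled on the `/api/chat` streaming format, not the actual implementation):

```python
# Sketch: simulated streamed chat chunks (hypothetical shapes).
chunks = [
    {"message": {"role": "assistant", "content": "", "tool_calls": [
        {"function": {"name": "get_weather",
                      "arguments": {"city": "Toronto"}}}]}, "done": False},
    {"message": {"role": "assistant", "content": ""}, "done": True},
]

tool_calls = []
for chunk in chunks:
    # Each streamed tool call arrives fully formed, so the client can
    # dispatch it immediately instead of buffering partial JSON fragments.
    for call in chunk["message"].get("tool_calls", []):
        tool_calls.append(call["function"])

print(tool_calls)
```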
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7836/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7836/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6383
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6383/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6383/comments
|
https://api.github.com/repos/ollama/ollama/issues/6383/events
|
https://github.com/ollama/ollama/issues/6383
| 2,469,293,284
|
I_kwDOJ0Z1Ps6TLmzk
| 6,383
|
update to CUDA v12.2 libraries in docker container?
|
{
"login": "juancaoviedo",
"id": 44776372,
"node_id": "MDQ6VXNlcjQ0Nzc2Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/44776372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juancaoviedo",
"html_url": "https://github.com/juancaoviedo",
"followers_url": "https://api.github.com/users/juancaoviedo/followers",
"following_url": "https://api.github.com/users/juancaoviedo/following{/other_user}",
"gists_url": "https://api.github.com/users/juancaoviedo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juancaoviedo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juancaoviedo/subscriptions",
"organizations_url": "https://api.github.com/users/juancaoviedo/orgs",
"repos_url": "https://api.github.com/users/juancaoviedo/repos",
"events_url": "https://api.github.com/users/juancaoviedo/events{/privacy}",
"received_events_url": "https://api.github.com/users/juancaoviedo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2024-08-16T02:12:17
| 2024-08-20T21:18:40
| 2024-08-20T21:18:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I'm deploying ollama in a self-hosted kubernetes cluster using https://github.com/otwld/ollama-helm. However, when the pod starts, it is not able to find the GPUs. I have the following logs:
```
time=2024-08-15T15:39:41.512Z level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-15T15:39:41.513Z level=DEBUG source=gpu.go:90 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-08-15T15:39:41.513Z level=DEBUG source=gpu.go:472 msg="Searching for GPU library" name=libcuda.so*
time=2024-08-15T15:39:41.513Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/local/nvidia/lib/libcuda.so** /usr/local/nvidia/lib64/libcuda.so** /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-08-15T15:39:41.513Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths=[]
time=2024-08-15T15:39:41.513Z level=DEBUG source=gpu.go:472 msg="Searching for GPU library" name=libcudart.so*
time=2024-08-15T15:39:41.513Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/local/nvidia/lib/libcudart.so** /usr/local/nvidia/lib64/libcudart.so** /tmp/ollama869795597/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
time=2024-08-15T15:39:41.514Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths=[/tmp/ollama869795597/runners/cuda_v11/libcudart.so.11.0]
cudaSetDevice err: 35
time=2024-08-15T15:39:41.515Z level=DEBUG source=gpu.go:537 msg="Unable to load cudart" library=/tmp/ollama869795597/runners/cuda_v11/libcudart.so.11.0 error="your nvidia driver is too old or missing. If you have a CUDA GPU please upgrade to run ollama"
time=2024-08-15T15:39:41.515Z level=DEBUG source=amd_linux.go:371 msg="amdgpu driver not detected /sys/module/amdgpu"
time=2024-08-15T15:39:41.515Z level=INFO source=gpu.go:350 msg="no compatible GPUs were discovered"
```
The nvidia-smi info for my system is the following:
```
NVIDIA-SMI 535.183.01
Driver Version: 535.183.01
CUDA Version: 12.2
```
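As a quick sanity check, here is a sketch comparing a driver version against the minimum Linux driver NVIDIA lists for each CUDA major version (the minimums below are assumptions taken from NVIDIA's release notes; verify them for your exact release):

```python
# Assumed minimum Linux driver versions per CUDA major version,
# from NVIDIA's CUDA release notes (verify for your release).
MIN_DRIVER = {11: (450, 80, 2), 12: (525, 60, 13)}

def driver_supports(driver: str, cuda_major: int) -> bool:
    parts = tuple(int(p) for p in driver.split("."))
    # Pad short version strings so tuple comparison is well-defined.
    parts += (0,) * (3 - len(parts))
    return parts >= MIN_DRIVER[cuda_major]

# Driver 535.183.01 (from the nvidia-smi output above) passes the check
# for both the bundled CUDA v11 runtime and a CUDA v12 runtime.
print(driver_supports("535.183.01", 11), driver_supports("535.183.01", 12))
```

By this check, the driver is new enough for both CUDA v11 and v12, so the `cudaSetDevice err: 35` in the logs may mean the container cannot see the host driver libraries at all, rather than a version mismatch.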
The values of the Helm chart I use are:
```yaml
ollama:
gpu:
# -- Enable GPU integration
enabled: true
# -- GPU type: 'nvidia' or 'amd'
# If 'ollama.gpu.enabled', default value is nvidia
# If set to 'amd', this will add 'rocm' suffix to image tag if 'image.tag' is not override
# This is due cause AMD and CPU/CUDA are different images
type: 'nvidia'
# -- Specify the number of GPU
number: 1
# -- only for nvidia cards; change to (example) 'nvidia.com/mig-1g.10gb' to use MIG slice
nvidiaResource: "nvidia.com/gpu"
# nvidiaResource: "nvidia.com/mig-1g.10gb" # example
....
extraEnv:
- name: NVIDIA_DRIVER_CAPABILITIES
value: compute, utility
- name: NVIDIA_VISIBLE_DEVICES
value: all
- name: OLLAMA_DEBUG
value: "1"
```
In issue https://github.com/ollama/ollama/issues/2670, @dhiltgen mentions the following: "CUDA v11 libraries are currently embedded within the ollama linux binary and are extracted at runtime". So my problem might be related to CUDA version compatibility. I cannot downgrade the cluster's CUDA version because other services also use the GPUs (with CUDA 12.2).
Is there a way to solve this? Should I rebuild the Docker image with the right CUDA version? Could you guide me on how to do so? Are there plans to update the CUDA version in future releases?
Thanks in advance,
|
{
"login": "juancaoviedo",
"id": 44776372,
"node_id": "MDQ6VXNlcjQ0Nzc2Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/44776372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juancaoviedo",
"html_url": "https://github.com/juancaoviedo",
"followers_url": "https://api.github.com/users/juancaoviedo/followers",
"following_url": "https://api.github.com/users/juancaoviedo/following{/other_user}",
"gists_url": "https://api.github.com/users/juancaoviedo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juancaoviedo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juancaoviedo/subscriptions",
"organizations_url": "https://api.github.com/users/juancaoviedo/orgs",
"repos_url": "https://api.github.com/users/juancaoviedo/repos",
"events_url": "https://api.github.com/users/juancaoviedo/events{/privacy}",
"received_events_url": "https://api.github.com/users/juancaoviedo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6383/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2407
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2407/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2407/comments
|
https://api.github.com/repos/ollama/ollama/issues/2407/events
|
https://github.com/ollama/ollama/issues/2407
| 2,124,420,942
|
I_kwDOJ0Z1Ps5-oBdO
| 2,407
|
Add support to internlm2-chat-20b model
|
{
"login": "online2311",
"id": 15675255,
"node_id": "MDQ6VXNlcjE1Njc1MjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/15675255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/online2311",
"html_url": "https://github.com/online2311",
"followers_url": "https://api.github.com/users/online2311/followers",
"following_url": "https://api.github.com/users/online2311/following{/other_user}",
"gists_url": "https://api.github.com/users/online2311/gists{/gist_id}",
"starred_url": "https://api.github.com/users/online2311/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/online2311/subscriptions",
"organizations_url": "https://api.github.com/users/online2311/orgs",
"repos_url": "https://api.github.com/users/online2311/repos",
"events_url": "https://api.github.com/users/online2311/events{/privacy}",
"received_events_url": "https://api.github.com/users/online2311/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-02-08T06:02:26
| 2025-01-30T00:04:58
| 2025-01-30T00:04:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If possible, could support for this model be added to ollama?
`https://huggingface.co/BoloniniD/internlm2-chat-20b-gguf`
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2407/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6021
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6021/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6021/comments
|
https://api.github.com/repos/ollama/ollama/issues/6021/events
|
https://github.com/ollama/ollama/issues/6021
| 2,433,673,981
|
I_kwDOJ0Z1Ps6RDur9
| 6,021
|
API returns 403 Forbidden when Origin http header is set
|
{
"login": "jvm123",
"id": 2043050,
"node_id": "MDQ6VXNlcjIwNDMwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jvm123",
"html_url": "https://github.com/jvm123",
"followers_url": "https://api.github.com/users/jvm123/followers",
"following_url": "https://api.github.com/users/jvm123/following{/other_user}",
"gists_url": "https://api.github.com/users/jvm123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jvm123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jvm123/subscriptions",
"organizations_url": "https://api.github.com/users/jvm123/orgs",
"repos_url": "https://api.github.com/users/jvm123/repos",
"events_url": "https://api.github.com/users/jvm123/events{/privacy}",
"received_events_url": "https://api.github.com/users/jvm123/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2024-07-28T00:40:08
| 2024-10-09T09:43:55
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When an API call with the Origin http header is made, ollama responds with 403 Forbidden. Applications such as [chatGPTBox](https://github.com/josStorer/chatGPTBox) use this header and thereby trigger the issue.
Example command that fails:
$ curl http://localhost:11434/v1/chat/completions \
-H "Content-Type: application/json" -H "Origin: abc" \
-d '{
"model": "llama3.1",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.2.5
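A likely workaround for the 403 is broadening the server's allowed origins via the documented `OLLAMA_ORIGINS` environment variable (verify the exact syntax against the FAQ for your version; the wildcard below disables origin checking entirely and a comma-separated list of specific origins is safer in production):

```shell
# Allow cross-origin requests before starting the server.
# "*" accepts any Origin header; prefer explicit origins such as
# OLLAMA_ORIGINS="https://example.com,chrome-extension://*" in production.
export OLLAMA_ORIGINS="*"
ollama serve
```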
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6021/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3945
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3945/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3945/comments
|
https://api.github.com/repos/ollama/ollama/issues/3945/events
|
https://github.com/ollama/ollama/pull/3945
| 2,265,845,210
|
PR_kwDOJ0Z1Ps5t2JrH
| 3,945
|
Update api.md
|
{
"login": "Darinochka",
"id": 39233990,
"node_id": "MDQ6VXNlcjM5MjMzOTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/39233990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Darinochka",
"html_url": "https://github.com/Darinochka",
"followers_url": "https://api.github.com/users/Darinochka/followers",
"following_url": "https://api.github.com/users/Darinochka/following{/other_user}",
"gists_url": "https://api.github.com/users/Darinochka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Darinochka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darinochka/subscriptions",
"organizations_url": "https://api.github.com/users/Darinochka/orgs",
"repos_url": "https://api.github.com/users/Darinochka/repos",
"events_url": "https://api.github.com/users/Darinochka/events{/privacy}",
"received_events_url": "https://api.github.com/users/Darinochka/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-26T13:55:48
| 2024-05-06T21:40:05
| 2024-05-06T21:39:59
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3945",
"html_url": "https://github.com/ollama/ollama/pull/3945",
"diff_url": "https://github.com/ollama/ollama/pull/3945.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3945.patch",
"merged_at": "2024-05-06T21:39:59"
}
|
Changed the calculation of tps (token/s) in the documentation
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3945/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/704
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/704/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/704/comments
|
https://api.github.com/repos/ollama/ollama/issues/704/events
|
https://github.com/ollama/ollama/issues/704
| 1,927,019,798
|
I_kwDOJ0Z1Ps5y2_0W
| 704
|
Allow for directed dir installation
|
{
"login": "vRobM",
"id": 2704733,
"node_id": "MDQ6VXNlcjI3MDQ3MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2704733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vRobM",
"html_url": "https://github.com/vRobM",
"followers_url": "https://api.github.com/users/vRobM/followers",
"following_url": "https://api.github.com/users/vRobM/following{/other_user}",
"gists_url": "https://api.github.com/users/vRobM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vRobM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vRobM/subscriptions",
"organizations_url": "https://api.github.com/users/vRobM/orgs",
"repos_url": "https://api.github.com/users/vRobM/repos",
"events_url": "https://api.github.com/users/vRobM/events{/privacy}",
"received_events_url": "https://api.github.com/users/vRobM/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-10-04T21:38:54
| 2023-10-25T23:21:56
| 2023-10-25T23:21:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Ask where the user wants it installed, with some defaults if you want to be nice.
Forcing installation into /usr/local/bin is unusable in containers and other file-system-mapped configurations.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/704/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7414
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7414/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7414/comments
|
https://api.github.com/repos/ollama/ollama/issues/7414/events
|
https://github.com/ollama/ollama/pull/7414
| 2,622,777,157
|
PR_kwDOJ0Z1Ps6AUcS0
| 7,414
|
runner.go: Better abstract vision model integration
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-30T03:29:08
| 2024-10-30T21:53:44
| 2024-10-30T21:53:43
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7414",
"html_url": "https://github.com/ollama/ollama/pull/7414",
"diff_url": "https://github.com/ollama/ollama/pull/7414.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7414.patch",
"merged_at": "2024-10-30T21:53:43"
}
|
- Update mllama to take the cross attention state as embeddings in a batch, more similar to how Llava handles it. This improves integration with the input cache.
- Pass locations in a prompt for embeddings using tags similar to Llava.
- Abstract the interface to vision models so the main runner accesses Clip and Mllama similarly.
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7414/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2440
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2440/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2440/comments
|
https://api.github.com/repos/ollama/ollama/issues/2440/events
|
https://github.com/ollama/ollama/pull/2440
| 2,128,197,262
|
PR_kwDOJ0Z1Ps5miWf9
| 2,440
|
Add Odin Runes, a Feature-Rich Java UI for Ollama, to README
|
{
"login": "leonid20000",
"id": 26918192,
"node_id": "MDQ6VXNlcjI2OTE4MTky",
"avatar_url": "https://avatars.githubusercontent.com/u/26918192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leonid20000",
"html_url": "https://github.com/leonid20000",
"followers_url": "https://api.github.com/users/leonid20000/followers",
"following_url": "https://api.github.com/users/leonid20000/following{/other_user}",
"gists_url": "https://api.github.com/users/leonid20000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leonid20000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leonid20000/subscriptions",
"organizations_url": "https://api.github.com/users/leonid20000/orgs",
"repos_url": "https://api.github.com/users/leonid20000/repos",
"events_url": "https://api.github.com/users/leonid20000/events{/privacy}",
"received_events_url": "https://api.github.com/users/leonid20000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-02-10T07:41:15
| 2024-03-06T19:57:49
| 2024-03-06T19:57:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2440",
"html_url": "https://github.com/ollama/ollama/pull/2440",
"diff_url": "https://github.com/ollama/ollama/pull/2440.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2440.patch",
"merged_at": "2024-03-06T19:57:49"
}
|
**Description:**
Hello,
I've added Odin Runes to the README under the "Community Integrations" section. Odin Runes is a Java-based GPT client that facilitates seamless interaction with GPT models, enhancing productivity in prompt engineering and text generation tasks. This addition highlights the integration between Odin Runes and Ollama, offering users the flexibility to leverage large language models locally within their development workflow.
**Changes:**
- Added Odin Runes to the "Community Integrations" section of the README.
**Demo:**

Caption: This GIF demonstrates the integration between Odin Runes and Ollama in action.
**Context:**
This pull request addresses the need to document the integration between Odin Runes and Ollama, providing visibility to users who may benefit from the integration and fostering collaboration between our projects.
**Closing Note:**
I believe this addition will be beneficial to users and contributors alike. I'm open to any feedback or suggestions regarding the integration or the proposed README addition.
Thank you for considering my pull request.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2440/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7895
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7895/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7895/comments
|
https://api.github.com/repos/ollama/ollama/issues/7895/events
|
https://github.com/ollama/ollama/issues/7895
| 2,707,381,736
|
I_kwDOJ0Z1Ps6hX13o
| 7,895
|
QwQ configure reflexion time
|
{
"login": "alphaonex86",
"id": 778581,
"node_id": "MDQ6VXNlcjc3ODU4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/778581?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alphaonex86",
"html_url": "https://github.com/alphaonex86",
"followers_url": "https://api.github.com/users/alphaonex86/followers",
"following_url": "https://api.github.com/users/alphaonex86/following{/other_user}",
"gists_url": "https://api.github.com/users/alphaonex86/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alphaonex86/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alphaonex86/subscriptions",
"organizations_url": "https://api.github.com/users/alphaonex86/orgs",
"repos_url": "https://api.github.com/users/alphaonex86/repos",
"events_url": "https://api.github.com/users/alphaonex86/events{/privacy}",
"received_events_url": "https://api.github.com/users/alphaonex86/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-30T13:49:39
| 2024-12-02T15:41:42
| 2024-12-02T15:41:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I would like to be able to configure the reflection time/loops/steps of the QwQ model.
Cheers,
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7895/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/2916
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2916/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2916/comments
|
https://api.github.com/repos/ollama/ollama/issues/2916/events
|
https://github.com/ollama/ollama/issues/2916
| 2,167,209,756
|
I_kwDOJ0Z1Ps6BLP8c
| 2,916
|
How to clear chat history in the Ollama CLI interface?
|
{
"login": "doggy8088",
"id": 88981,
"node_id": "MDQ6VXNlcjg4OTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/88981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doggy8088",
"html_url": "https://github.com/doggy8088",
"followers_url": "https://api.github.com/users/doggy8088/followers",
"following_url": "https://api.github.com/users/doggy8088/following{/other_user}",
"gists_url": "https://api.github.com/users/doggy8088/gists{/gist_id}",
"starred_url": "https://api.github.com/users/doggy8088/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doggy8088/subscriptions",
"organizations_url": "https://api.github.com/users/doggy8088/orgs",
"repos_url": "https://api.github.com/users/doggy8088/repos",
"events_url": "https://api.github.com/users/doggy8088/events{/privacy}",
"received_events_url": "https://api.github.com/users/doggy8088/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-04T16:10:01
| 2024-05-30T17:18:48
| 2024-03-04T19:25:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When chatting in the Ollama CLI interface, earlier turns of the conversation affect the results of later ones. Is there a way to clear out all the previous conversation history?
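For reference, recent versions of the interactive REPL include slash commands for this (an assumption to verify with `/?` in your installed version):

```shell
# Transcript sketch: inside `ollama run <model>`, commands start with "/".
# /clear resets the session context without reloading the model;
# /? lists all commands supported by your version.
ollama run llama3
>>> /clear
```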
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2916/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6193
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6193/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6193/comments
|
https://api.github.com/repos/ollama/ollama/issues/6193/events
|
https://github.com/ollama/ollama/issues/6193
| 2,449,765,885
|
I_kwDOJ0Z1Ps6SBHX9
| 6,193
|
Add New SOTA Models: Palmyra-Med and Palmyra-Fin
|
{
"login": "gileneusz",
"id": 34601970,
"node_id": "MDQ6VXNlcjM0NjAxOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gileneusz",
"html_url": "https://github.com/gileneusz",
"followers_url": "https://api.github.com/users/gileneusz/followers",
"following_url": "https://api.github.com/users/gileneusz/following{/other_user}",
"gists_url": "https://api.github.com/users/gileneusz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gileneusz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gileneusz/subscriptions",
"organizations_url": "https://api.github.com/users/gileneusz/orgs",
"repos_url": "https://api.github.com/users/gileneusz/repos",
"events_url": "https://api.github.com/users/gileneusz/events{/privacy}",
"received_events_url": "https://api.github.com/users/gileneusz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-08-06T01:52:19
| 2024-08-18T06:26:21
| 2024-08-18T06:26:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Request generated by ChatGPT because I don't have time to write such posts, and I'm lazy too. Thanks for understanding my lazy nature.
I recently came across two impressive state-of-the-art (SOTA) domain-specific models, Palmyra-Med and Palmyra-Fin, described on https://writer.com/blog/palmyra-med-fin-models/. I believe these models would be a valuable addition to the Ollama platform. Here’s a brief overview:
Palmyra-Med: https://huggingface.co/Writer/Palmyra-Med-70B-32K
Designed for medical applications.
Achieves 85.9% average across all medical benchmarks, surpassing Med-PaLM-2.
Excels in clinical knowledge (90.9% in MMLU Clinical Knowledge) and anatomy (83.7% in MMLU Anatomy).
Supports diagnostic accuracy, treatment planning, genetic counseling, and biomedical research.
Cost-effective at $10 per 1M output tokens, compared to $60 for GPT-4.
Palmyra-Fin: https://huggingface.co/Writer/Palmyra-Fin-70B-32K
Tailored for financial applications.
Supports financial trend analysis, investment analysis, risk evaluation, and asset allocation strategy.
Utilizes well-curated financial training data and fine-tuning instruction data to ensure high accuracy.
Both models integrate seamlessly with the Writer full-stack generative AI platform, offering tools like integrated graph-based RAG technology, AI guardrails, and a suite of developer tools. Available via API, No-code tools, and the Writer Framework, they come with an open-model license for easy deployment locally or in private clouds.
Adding these models to Ollama would significantly enhance the platform's capabilities in the medical and financial sectors, providing users with highly specialized and accurate AI tools.
Thank you for considering this request!
|
{
"login": "gileneusz",
"id": 34601970,
"node_id": "MDQ6VXNlcjM0NjAxOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gileneusz",
"html_url": "https://github.com/gileneusz",
"followers_url": "https://api.github.com/users/gileneusz/followers",
"following_url": "https://api.github.com/users/gileneusz/following{/other_user}",
"gists_url": "https://api.github.com/users/gileneusz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gileneusz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gileneusz/subscriptions",
"organizations_url": "https://api.github.com/users/gileneusz/orgs",
"repos_url": "https://api.github.com/users/gileneusz/repos",
"events_url": "https://api.github.com/users/gileneusz/events{/privacy}",
"received_events_url": "https://api.github.com/users/gileneusz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6193/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6193/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8603
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8603/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8603/comments
|
https://api.github.com/repos/ollama/ollama/issues/8603/events
|
https://github.com/ollama/ollama/pull/8603
| 2,812,290,292
|
PR_kwDOJ0Z1Ps6JC8y5
| 8,603
|
Update the Documentation.
|
{
"login": "kontactguddu",
"id": 49631628,
"node_id": "MDQ6VXNlcjQ5NjMxNjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/49631628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kontactguddu",
"html_url": "https://github.com/kontactguddu",
"followers_url": "https://api.github.com/users/kontactguddu/followers",
"following_url": "https://api.github.com/users/kontactguddu/following{/other_user}",
"gists_url": "https://api.github.com/users/kontactguddu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kontactguddu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kontactguddu/subscriptions",
"organizations_url": "https://api.github.com/users/kontactguddu/orgs",
"repos_url": "https://api.github.com/users/kontactguddu/repos",
"events_url": "https://api.github.com/users/kontactguddu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kontactguddu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-27T07:40:57
| 2025-01-28T08:41:06
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8603",
"html_url": "https://github.com/ollama/ollama/pull/8603",
"diff_url": "https://github.com/ollama/ollama/pull/8603.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8603.patch",
"merged_at": null
}
|
The DeepSeek models were also added to the documentation section.
1. 671B
2. 70B
3. 1.5B
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8603/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4252
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4252/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4252/comments
|
https://api.github.com/repos/ollama/ollama/issues/4252/events
|
https://github.com/ollama/ollama/issues/4252
| 2,284,816,551
|
I_kwDOJ0Z1Ps6IL4in
| 4,252
|
Ollama Chinese community group chat announcement
|
{
"login": "zsq2010",
"id": 4374659,
"node_id": "MDQ6VXNlcjQzNzQ2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4374659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zsq2010",
"html_url": "https://github.com/zsq2010",
"followers_url": "https://api.github.com/users/zsq2010/followers",
"following_url": "https://api.github.com/users/zsq2010/following{/other_user}",
"gists_url": "https://api.github.com/users/zsq2010/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zsq2010/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zsq2010/subscriptions",
"organizations_url": "https://api.github.com/users/zsq2010/orgs",
"repos_url": "https://api.github.com/users/zsq2010/repos",
"events_url": "https://api.github.com/users/zsq2010/events{/privacy}",
"received_events_url": "https://api.github.com/users/zsq2010/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-08T06:48:54
| 2024-05-09T21:12:00
| 2024-05-09T21:11:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
QQ:
808227197

Welcome to join, communicate, and share information!!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4252/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7333
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7333/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7333/comments
|
https://api.github.com/repos/ollama/ollama/issues/7333/events
|
https://github.com/ollama/ollama/issues/7333
| 2,609,174,155
|
I_kwDOJ0Z1Ps6bhNaL
| 7,333
|
`OLLAMA_MODELS` env var is ignored
|
{
"login": "aliok",
"id": 376732,
"node_id": "MDQ6VXNlcjM3NjczMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/376732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aliok",
"html_url": "https://github.com/aliok",
"followers_url": "https://api.github.com/users/aliok/followers",
"following_url": "https://api.github.com/users/aliok/following{/other_user}",
"gists_url": "https://api.github.com/users/aliok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aliok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aliok/subscriptions",
"organizations_url": "https://api.github.com/users/aliok/orgs",
"repos_url": "https://api.github.com/users/aliok/repos",
"events_url": "https://api.github.com/users/aliok/events{/privacy}",
"received_events_url": "https://api.github.com/users/aliok/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-10-23T16:19:42
| 2024-10-23T18:00:31
| 2024-10-23T17:16:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I want to download models to a specific directory using `OLLAMA_MODELS` env var, I see it is ignored.
https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-set-them-to-a-different-location says this:
```
If a different directory needs to be used, set the environment variable OLLAMA_MODELS to the chosen directory.
```
However, I don't see the model files in the specified directory:
```
➜ mkdir -p /tmp/foo/models
➜ ls -lah /tmp/foo/models
total 0
drwxr-xr-x 2 aliok 64 Oct 23 19:10 ./
drwxr-xr-x 3 aliok 96 Oct 23 19:10 ../
➜ export OLLAMA_MODELS="/tmp/foo/models/"
➜ ollama pull gemma2:2b
pulling manifest
pulling 7462734796d6... 100% ▕1.6 GB
pulling e0a42594d802... 100% ▕ 358 B
pulling 097a36493f71... 100% ▕8.4 KB
pulling 2490e7468436... 100% ▕ 65 B
pulling e18ad7af7efb... 100% ▕ 487 B
verifying sha256 digest
writing manifest
success
➜ ls -lah /tmp/foo/models
total 0
drwxr-xr-x 2 aliok 64 Oct 23 19:10 ./
drwxr-xr-x 4 aliok 128 Oct 23 19:13 ../
```
`pull` command doesn't list this env var anyway:
```
➜ ollama pull --help
Pull a model from a registry
Usage:
ollama pull MODEL [flags]
Flags:
-h, --help help for pull
--insecure Use an insecure registry
Environment Variables:
OLLAMA_HOST IP Address for the ollama server (default 127.0.0.1:11434)
```
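For context on the behavior above: `OLLAMA_MODELS` is read by the `ollama serve` process, not by the client, so exporting it in the shell that runs `ollama pull` has no effect when a separate server (such as the macOS app) is already running. A hedged sketch of setting it for the server instead (the path is illustrative, and the `launchctl` approach is what the FAQ reportedly suggests for app-launched processes):

```shell
# Stop the running server/app first, then start a server
# with the variable in *its* environment:
OLLAMA_MODELS=/tmp/foo/models ollama serve

# On macOS, when using the app, the variable can reportedly be set
# for GUI-launched processes via launchctl (then restart the app):
launchctl setenv OLLAMA_MODELS /tmp/foo/models
```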
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.14
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7333/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4125
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4125/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4125/comments
|
https://api.github.com/repos/ollama/ollama/issues/4125/events
|
https://github.com/ollama/ollama/issues/4125
| 2,277,628,691
|
I_kwDOJ0Z1Ps6HwdsT
| 4,125
|
HTTPStatusError: Client error '404 Not Found' for url 'http://127.0.0.1:11434/api/chat'
|
{
"login": "rites1095",
"id": 167276254,
"node_id": "U_kgDOCfhu3g",
"avatar_url": "https://avatars.githubusercontent.com/u/167276254?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rites1095",
"html_url": "https://github.com/rites1095",
"followers_url": "https://api.github.com/users/rites1095/followers",
"following_url": "https://api.github.com/users/rites1095/following{/other_user}",
"gists_url": "https://api.github.com/users/rites1095/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rites1095/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rites1095/subscriptions",
"organizations_url": "https://api.github.com/users/rites1095/orgs",
"repos_url": "https://api.github.com/users/rites1095/repos",
"events_url": "https://api.github.com/users/rites1095/events{/privacy}",
"received_events_url": "https://api.github.com/users/rites1095/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-05-03T12:56:57
| 2024-06-17T18:04:08
| 2024-05-03T13:00:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am using Weaviate DB with Ollama. Everything is working fine, but at the time of generating a response it gives the error "HTTPStatusError: Client error '404 Not Found' for url 'http://127.0.0.1:11434/api/chat'"
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.7
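Not part of the original report, but for context: very old Ollama builds predate the `/api/chat` endpoint (only `/api/generate` existed), which produces exactly this 404, and upgrading the server is the usual fix. A minimal, hypothetical sketch of the request body `/api/chat` expects on current versions (the model name is illustrative, and nothing is actually sent here):

```python
import json

# Shape of a POST body for Ollama's /api/chat endpoint.
# The model name "llama3" is purely illustrative.
payload = {
    "model": "llama3",
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,
}

# Serialize the payload the way an HTTP client would before POSTing
# it to http://127.0.0.1:11434/api/chat.
body = json.dumps(payload)
print(body)
```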
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4125/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3209
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3209/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3209/comments
|
https://api.github.com/repos/ollama/ollama/issues/3209/events
|
https://github.com/ollama/ollama/pull/3209
| 2,191,128,576
|
PR_kwDOJ0Z1Ps5p4k7N
| 3,209
|
Update development.md
|
{
"login": "zvrr",
"id": 194304,
"node_id": "MDQ6VXNlcjE5NDMwNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/194304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zvrr",
"html_url": "https://github.com/zvrr",
"followers_url": "https://api.github.com/users/zvrr/followers",
"following_url": "https://api.github.com/users/zvrr/following{/other_user}",
"gists_url": "https://api.github.com/users/zvrr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zvrr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zvrr/subscriptions",
"organizations_url": "https://api.github.com/users/zvrr/orgs",
"repos_url": "https://api.github.com/users/zvrr/repos",
"events_url": "https://api.github.com/users/zvrr/events{/privacy}",
"received_events_url": "https://api.github.com/users/zvrr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-18T03:02:44
| 2024-11-11T06:04:05
| 2024-11-11T06:04:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3209",
"html_url": "https://github.com/ollama/ollama/pull/3209",
"diff_url": "https://github.com/ollama/ollama/pull/3209.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3209.patch",
"merged_at": null
}
|
docs: update development.md, adding a Docker build description
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3209/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5123
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5123/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5123/comments
|
https://api.github.com/repos/ollama/ollama/issues/5123/events
|
https://github.com/ollama/ollama/issues/5123
| 2,360,857,000
|
I_kwDOJ0Z1Ps6Mt9Go
| 5,123
|
"/api/generate" or "/api/chat" always fails after ~7m20s
|
{
"login": "srchong",
"id": 61468749,
"node_id": "MDQ6VXNlcjYxNDY4NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/61468749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srchong",
"html_url": "https://github.com/srchong",
"followers_url": "https://api.github.com/users/srchong/followers",
"following_url": "https://api.github.com/users/srchong/following{/other_user}",
"gists_url": "https://api.github.com/users/srchong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srchong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srchong/subscriptions",
"organizations_url": "https://api.github.com/users/srchong/orgs",
"repos_url": "https://api.github.com/users/srchong/repos",
"events_url": "https://api.github.com/users/srchong/events{/privacy}",
"received_events_url": "https://api.github.com/users/srchong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-06-18T22:35:31
| 2024-10-23T22:19:58
| 2024-10-23T22:19:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
I have a small problem: I try to run the model that I downloaded, but it does not start.
I have tried several ways:
ollama run qwen2:72b-instruct --verbose
I also tried with:
```
POST http://localhost:11434/api/generate
500
7 m 25.86 s
Network
Request Headers
Content-Type: application/json
User-Agent: PostmanRuntime/7.37.3
Accept: */*
Postman-Token: 430c1907-b848-4907-87e3-6fff26d2f437
Host: localhost:11434
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Content-Length: 58
Request Body
{
"model": "qwen2:72b-instruct", "keep_alive":"30m"
}
Response Headers
Content-Type: application/json; charset=utf-8
Date: Tue, 18 Jun 2024 22:14:48 GMT
Content-Length: 74
Response Body
{"error":"timed out waiting for llama runner to start - progress 1.00 - "}
```
I executed the `ollama ps` command, with this output:
```
NAME ID SIZE PROCESSOR UNTIL
qwen2:72b-instruct 14066dfa503f 42 GB 92%/8% CPU/GPU 29 minutes from now
```
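The `92%/8% CPU/GPU` split above lines up with the server log that follows: the model needs roughly 39.6 GiB fully loaded but only about 3.3 GiB of VRAM is available, so only 2 of 80 layers are offloaded and nearly everything runs on the CPU, which is why loading crawls past the startup timeout. A rough, illustrative back-of-envelope using the figures from the log (this is arithmetic only, not the actual scheduler logic):

```python
# Figures taken from the server log (GiB); illustrative arithmetic only.
repeating_weights = 36.8   # memory.weights.repeating
total_layers = 80          # qwen2.block_count
available_vram = 3.3       # available VRAM reported for the GTX 1050
graph_partial = 1.3        # memory.graph.partial
nonrepeating = 0.95        # memory.weights.nonrepeating (~974.6 MiB)

# Approximate per-layer weight cost, and how many layers fit on the GPU
# after fixed overheads are subtracted.
per_layer = repeating_weights / total_layers   # ~0.46 GiB per layer
usable = available_vram - graph_partial - nonrepeating
gpu_layers = int(usable // per_layer)

print(gpu_layers)  # matches layers.real=2 in the log
```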
```
2024/06/18 16:25:09 routes.go:1011: INFO server config env="map[OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:-1 OLLAMA_LLM_LIBRARY:cuda_v11.3 OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:D:\\OLLAMA OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\macki\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_TMPDIR:]"
time=2024-06-18T16:25:09.596-06:00 level=INFO source=images.go:725 msg="total blobs: 5"
time=2024-06-18T16:25:09.597-06:00 level=INFO source=images.go:732 msg="total unused blobs removed: 0"
time=2024-06-18T16:25:09.599-06:00 level=INFO source=routes.go:1057 msg="Listening on [::]:11434 (version 0.1.44)"
time=2024-06-18T16:25:09.599-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\cpu
time=2024-06-18T16:25:09.599-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx
time=2024-06-18T16:25:09.599-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2
time=2024-06-18T16:25:09.599-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3
time=2024-06-18T16:25:09.599-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\rocm_v5.7
time=2024-06-18T16:25:09.599-06:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7]"
time=2024-06-18T16:25:09.599-06:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-06-18T16:25:09.599-06:00 level=DEBUG source=sched.go:90 msg="starting llm scheduler"
time=2024-06-18T16:25:09.599-06:00 level=DEBUG source=gpu.go:132 msg="Detecting GPUs"
time=2024-06-18T16:25:09.599-06:00 level=DEBUG source=gpu.go:274 msg="Searching for GPU library" name=nvcuda.dll
time=2024-06-18T16:25:09.599-06:00 level=DEBUG source=gpu.go:293 msg="gpu library search" globs="[C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\bin\\nvcuda.dll* C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\libnvvp\\nvcuda.dll* C:\\Users\\macki\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* C:\\Program Files\\Common Files\\Oracle\\Java\\javapath\\nvcuda.dll* C:\\Program Files\\Broadcom\\Broadcom 802.11 Network Adapter\\nvcuda.dll* C:\\Windows\\system32\\nvcuda.dll* C:\\Windows\\nvcuda.dll* C:\\Windows\\System32\\Wbem\\nvcuda.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll* C:\\Windows\\System32\\OpenSSH\\nvcuda.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll* C:\\Program Files\\dotnet\\nvcuda.dll* D:\\windows\\installations\\nodejs\\nvcuda.dll* C:\\Program Files\\nodejs\\nvcuda.dll* C:\\WINDOWS\\system32\\nvcuda.dll* C:\\WINDOWS\\nvcuda.dll* C:\\WINDOWS\\System32\\Wbem\\nvcuda.dll* C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll* C:\\WINDOWS\\System32\\OpenSSH\\nvcuda.dll* C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR\\nvcuda.dll* C:\\Program Files\\Git\\cmd\\nvcuda.dll* C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.2.0\\nvcuda.dll* C:\\Users\\macki\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll* C:\\Users\\macki\\AppData\\Local\\Programs\\Microsoft VS Code\\bin\\nvcuda.dll* C:\\Users\\macki\\.dotnet\\tools\\nvcuda.dll* D:\\windows\\installations\\nodejs\\nvcuda.dll* C:\\Users\\macki\\.detaspace\\bin\\nvcuda.dll* C:\\Users\\macki\\AppData\\Roaming\\npm\\nvcuda.dll* D:\\windows\\installations\\nodejs\\nvcuda.dll* C:\\Program Files\\nodejs\\nvcuda.dll* C:\\Users\\macki\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* c:\\windows\\system*\\nvcuda.dll]"
time=2024-06-18T16:25:09.604-06:00 level=DEBUG source=gpu.go:298 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll*"
time=2024-06-18T16:25:09.611-06:00 level=DEBUG source=gpu.go:327 msg="discovered GPU libraries" paths="[C:\\Windows\\system32\\nvcuda.dll C:\\WINDOWS\\system32\\nvcuda.dll]"
time=2024-06-18T16:25:09.643-06:00 level=DEBUG source=gpu.go:137 msg="detected GPUs" count=1 library=C:\Windows\system32\nvcuda.dll
time=2024-06-18T16:25:09.644-06:00 level=DEBUG source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-06-18T16:25:09.762-06:00 level=DEBUG source=amd_windows.go:31 msg="unable to load amdhip64.dll: The specified module could not be found."
time=2024-06-18T16:25:09.762-06:00 level=INFO source=types.go:71 msg="inference compute" id=GPU-90dece56-f80e-a534-ac66-3f26869294d4 library=cuda compute=6.1 driver=12.5 name="NVIDIA GeForce GTX 1050" total="4.0 GiB" available="3.3 GiB"
time=2024-06-18T16:25:17.703-06:00 level=DEBUG source=gpu.go:132 msg="Detecting GPUs"
time=2024-06-18T16:25:17.703-06:00 level=DEBUG source=gpu.go:274 msg="Searching for GPU library" name=nvcuda.dll
time=2024-06-18T16:25:17.703-06:00 level=DEBUG source=gpu.go:293 msg="gpu library search" globs="[C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\bin\\nvcuda.dll* C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\libnvvp\\nvcuda.dll* C:\\Users\\macki\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* C:\\Program Files\\Common Files\\Oracle\\Java\\javapath\\nvcuda.dll* C:\\Program Files\\Broadcom\\Broadcom 802.11 Network Adapter\\nvcuda.dll* C:\\Windows\\system32\\nvcuda.dll* C:\\Windows\\nvcuda.dll* C:\\Windows\\System32\\Wbem\\nvcuda.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll* C:\\Windows\\System32\\OpenSSH\\nvcuda.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll* C:\\Program Files\\dotnet\\nvcuda.dll* D:\\windows\\installations\\nodejs\\nvcuda.dll* C:\\Program Files\\nodejs\\nvcuda.dll* C:\\WINDOWS\\system32\\nvcuda.dll* C:\\WINDOWS\\nvcuda.dll* C:\\WINDOWS\\System32\\Wbem\\nvcuda.dll* C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll* C:\\WINDOWS\\System32\\OpenSSH\\nvcuda.dll* C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR\\nvcuda.dll* C:\\Program Files\\Git\\cmd\\nvcuda.dll* C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.2.0\\nvcuda.dll* C:\\Users\\macki\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll* C:\\Users\\macki\\AppData\\Local\\Programs\\Microsoft VS Code\\bin\\nvcuda.dll* C:\\Users\\macki\\.dotnet\\tools\\nvcuda.dll* D:\\windows\\installations\\nodejs\\nvcuda.dll* C:\\Users\\macki\\.detaspace\\bin\\nvcuda.dll* C:\\Users\\macki\\AppData\\Roaming\\npm\\nvcuda.dll* D:\\windows\\installations\\nodejs\\nvcuda.dll* C:\\Program Files\\nodejs\\nvcuda.dll* C:\\Users\\macki\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* c:\\windows\\system*\\nvcuda.dll]"
time=2024-06-18T16:25:17.709-06:00 level=DEBUG source=gpu.go:298 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll*"
time=2024-06-18T16:25:17.715-06:00 level=DEBUG source=gpu.go:327 msg="discovered GPU libraries" paths="[C:\\Windows\\system32\\nvcuda.dll C:\\WINDOWS\\system32\\nvcuda.dll]"
time=2024-06-18T16:25:17.715-06:00 level=DEBUG source=gpu.go:137 msg="detected GPUs" count=1 library=C:\Windows\system32\nvcuda.dll
time=2024-06-18T16:25:17.715-06:00 level=DEBUG source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-06-18T16:25:17.797-06:00 level=DEBUG source=amd_windows.go:31 msg="unable to load amdhip64.dll: The specified module could not be found."
time=2024-06-18T16:25:17.824-06:00 level=DEBUG source=gguf.go:57 msg="model = &llm.gguf{containerGGUF:(*llm.containerGGUF)(0xc0006a6b80), kv:llm.KV{}, tensors:[]*llm.Tensor(nil), parameters:0x0}"
time=2024-06-18T16:25:19.747-06:00 level=DEBUG source=sched.go:153 msg="loading first model" model=D:\OLLAMA\blobs\sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb
time=2024-06-18T16:25:19.748-06:00 level=DEBUG source=memory.go:44 msg=evaluating library=cuda gpu_count=1 available="3.3 GiB"
time=2024-06-18T16:25:19.749-06:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=2 memory.available="3.3 GiB" memory.required.full="39.6 GiB" memory.required.partial="3.1 GiB" memory.required.kv="640.0 MiB" memory.weights.total="37.7 GiB" memory.weights.repeating="36.8 GiB" memory.weights.nonrepeating="974.6 MiB" memory.graph.full="313.0 MiB" memory.graph.partial="1.3 GiB"
time=2024-06-18T16:25:19.749-06:00 level=DEBUG source=memory.go:44 msg=evaluating library=cuda gpu_count=1 available="3.3 GiB"
time=2024-06-18T16:25:19.750-06:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=2 memory.available="3.3 GiB" memory.required.full="39.6 GiB" memory.required.partial="3.1 GiB" memory.required.kv="640.0 MiB" memory.weights.total="37.7 GiB" memory.weights.repeating="36.8 GiB" memory.weights.nonrepeating="974.6 MiB" memory.graph.full="313.0 MiB" memory.graph.partial="1.3 GiB"
time=2024-06-18T16:25:19.750-06:00 level=DEBUG source=memory.go:44 msg=evaluating library=cuda gpu_count=1 available="3.3 GiB"
time=2024-06-18T16:25:19.750-06:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=2 memory.available="3.3 GiB" memory.required.full="39.6 GiB" memory.required.partial="3.1 GiB" memory.required.kv="640.0 MiB" memory.weights.total="37.7 GiB" memory.weights.repeating="36.8 GiB" memory.weights.nonrepeating="974.6 MiB" memory.graph.full="313.0 MiB" memory.graph.partial="1.3 GiB"
time=2024-06-18T16:25:19.750-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\cpu
time=2024-06-18T16:25:19.750-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx
time=2024-06-18T16:25:19.750-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2
time=2024-06-18T16:25:19.750-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3
time=2024-06-18T16:25:19.751-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\rocm_v5.7
time=2024-06-18T16:25:19.751-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\cpu
time=2024-06-18T16:25:19.751-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx
time=2024-06-18T16:25:19.751-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2
time=2024-06-18T16:25:19.751-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3
time=2024-06-18T16:25:19.751-06:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\rocm_v5.7
time=2024-06-18T16:25:19.751-06:00 level=DEBUG source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-06-18T16:25:19.751-06:00 level=INFO source=server.go:140 msg="user override" OLLAMA_LLM_LIBRARY=cuda_v11.3 path=C:\Users\macki\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3
time=2024-06-18T16:25:19.762-06:00 level=INFO source=server.go:341 msg="starting llama server" cmd="C:\\Users\\macki\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model D:\\OLLAMA\\blobs\\sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 2 --verbose --parallel 1 --port 64688"
time=2024-06-18T16:25:19.762-06:00 level=DEBUG source=server.go:356 msg=subprocess environment="[CUDA_PATH=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5 CUDA_PATH_V12_5=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5 PATH=C:\\Users\\macki\\AppData\\Local\\Programs\\Ollama;C:\\Users\\macki\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\libnvvp;;C:\\Program Files\\Common Files\\Oracle\\Java\\javapath;C:\\Program Files\\Broadcom\\Broadcom 802.11 Network Adapter;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\dotnet\\;D:\\windows\\installations\\nodejs;C:\\Program Files\\nodejs;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Program Files\\Git\\cmd;C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.2.0\\;C:\\Users\\macki\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\macki\\AppData\\Local\\Programs\\Microsoft VS Code\\bin;C:\\Users\\macki\\.dotnet\\tools;D:\\windows\\installations\\nodejs;C:\\Users\\macki\\.detaspace\\bin;C:\\Users\\macki\\AppData\\Roaming\\npm;D:\\windows\\installations\\nodejs;C:\\Program Files\\nodejs;C:\\Users\\macki\\AppData\\Local\\Programs\\Ollama CUDA_VISIBLE_DEVICES=GPU-90dece56-f80e-a534-ac66-3f26869294d4]"
time=2024-06-18T16:25:19.766-06:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-06-18T16:25:19.766-06:00 level=INFO source=server.go:529 msg="waiting for llama runner to start responding"
time=2024-06-18T16:25:19.767-06:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3051 commit="5921b8f0" tid="1520" timestamp=1718749519
INFO [wmain] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="1520" timestamp=1718749519 total_threads=8
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="64688" tid="1520" timestamp=1718749519
llama_model_loader: loaded meta data with 21 key-value pairs and 963 tensors from D:\OLLAMA\blobs\sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-72B-Instruct
llama_model_loader: - kv 2: qwen2.block_count u32 = 80
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 8192
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 29568
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 64
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 9: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 12: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 17: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 19: tokenizer.chat_template str = {% for message in messages %}{% if lo...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 401 tensors
llama_model_loader: - type q4_0: 561 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-06-18T16:25:20.030-06:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 421
llm_load_vocab: token to piece cache size = 1.8703 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 29568
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 72.71 B
llm_load_print_meta: model size = 38.39 GiB (4.54 BPW)
llm_load_print_meta: general.name = Qwen2-72B-Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce GTX 1050, compute capability 6.1, VMM: yes
llm_load_tensors: ggml ctx size = 0.92 MiB
llm_load_tensors: offloading 2 repeating layers to GPU
llm_load_tensors: offloaded 2/81 layers to GPU
llm_load_tensors: CPU buffer size = 39315.94 MiB
llm_load_tensors: CUDA0 buffer size = 941.83 MiB
time=2024-06-18T16:27:36.460-06:00 level=DEBUG source=server.go:578 msg="model load progress 0.98"
time=2024-06-18T16:27:40.277-06:00 level=DEBUG source=server.go:578 msg="model load progress 0.99"
time=2024-06-18T16:27:43.034-06:00 level=DEBUG source=server.go:578 msg="model load progress 1.00"
time=2024-06-18T16:27:43.314-06:00 level=DEBUG source=server.go:581 msg="model load completed, waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 624.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 16.00 MiB
llama_new_context_with_model: KV self size = 640.00 MiB, K (f16): 320.00 MiB, V (f16): 320.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.61 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 1287.53 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 20.01 MiB
llama_new_context_with_model: graph nodes = 2806
llama_new_context_with_model: graph splits = 1096
time=2024-06-18T16:32:43.435-06:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 1.00 - "
time=2024-06-18T16:32:43.435-06:00 level=DEBUG source=sched.go:347 msg="triggering expiration for failed load" model=D:\OLLAMA\blobs\sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb
time=2024-06-18T16:32:43.435-06:00 level=DEBUG source=sched.go:258 msg="runner expired event received" modelPath=D:\OLLAMA\blobs\sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb
time=2024-06-18T16:32:43.435-06:00 level=DEBUG source=sched.go:274 msg="got lock to unload" modelPath=D:\OLLAMA\blobs\sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb
time=2024-06-18T16:32:43.436-06:00 level=DEBUG source=server.go:990 msg="stopping llama server"
time=2024-06-18T16:32:43.436-06:00 level=DEBUG source=server.go:996 msg="waiting for llama server to exit"
[GIN] 2024/06/18 - 16:32:43 | 500 | 7m25s | ::1 | POST "/api/generate"
time=2024-06-18T16:32:45.849-06:00 level=DEBUG source=server.go:1000 msg="llama server stopped"
time=2024-06-18T16:32:45.849-06:00 level=DEBUG source=sched.go:279 msg="runner released" modelPath=D:\OLLAMA\blobs\sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb
time=2024-06-18T16:32:45.849-06:00 level=DEBUG source=sched.go:283 msg="sending an unloaded event" modelPath=D:\OLLAMA\blobs\sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb
time=2024-06-18T16:32:45.849-06:00 level=DEBUG source=sched.go:206 msg="ignoring unload event with no pending requests"
```
I set OLLAMA_KEEP_ALIVE=-1 in several ways, including via a POST with curl, but without success.
Can you help me, please?
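For reference, a minimal sketch of passing `keep_alive` per request, which is the field Ollama's generate API documents for this; `-1` asks the server to keep the model loaded indefinitely. The model name and prompt below are placeholders, not taken from the report above:

```shell
# Build the request body; keep_alive=-1 keeps the model resident indefinitely.
# Model name and prompt are illustrative placeholders.
PAYLOAD='{"model":"qwen2:72b","prompt":"hello","keep_alive":-1}'
echo "$PAYLOAD"
# Send it to a local Ollama server (default port 11434):
# curl http://localhost:11434/api/generate -d "$PAYLOAD"
```

Note that the environment-variable form (`OLLAMA_KEEP_ALIVE=-1`) must be set in the environment of the *server* process, not the client; on Windows that means setting it before the Ollama service starts, then restarting the app.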
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.44
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5123/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6836
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6836/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6836/comments
|
https://api.github.com/repos/ollama/ollama/issues/6836/events
|
https://github.com/ollama/ollama/issues/6836
| 2,530,204,886
|
I_kwDOJ0Z1Ps6Wz9zW
| 6,836
|
CUDA error
|
{
"login": "harshallakare",
"id": 37395949,
"node_id": "MDQ6VXNlcjM3Mzk1OTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/37395949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harshallakare",
"html_url": "https://github.com/harshallakare",
"followers_url": "https://api.github.com/users/harshallakare/followers",
"following_url": "https://api.github.com/users/harshallakare/following{/other_user}",
"gists_url": "https://api.github.com/users/harshallakare/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harshallakare/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harshallakare/subscriptions",
"organizations_url": "https://api.github.com/users/harshallakare/orgs",
"repos_url": "https://api.github.com/users/harshallakare/repos",
"events_url": "https://api.github.com/users/harshallakare/events{/privacy}",
"received_events_url": "https://api.github.com/users/harshallakare/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-09-17T06:45:52
| 2024-09-25T20:55:22
| 2024-09-25T20:54:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm getting this strange error. Everything was working fine until a couple of days ago.
```
root@vm01:/var/log# ollama run gemma2:27b
Error: llama runner process has terminated: CUDA error: unspecified launch failure
  current device: 0, in function ggml_cuda_compute_forward at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2326
  err
/go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:102: CUDA error
root@vm01:/var/log#
```
I also observed that sometimes it produced output but terminated partway through.
```
root@vm01:/var/log# nvidia-smi
Tue Sep 17 12:15:02 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.85.02    Driver Version: 510.85.02    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GRID A100D-40C      On   | 00000000:06:00.0  On |                  N/A |
| N/A   N/A    P0    N/A /  N/A |   3096MiB / 40960MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A   1406406      G   /usr/lib/xorg/Xorg                 90MiB |
|    0   N/A  N/A   3276533      G   /usr/lib/xorg/Xorg                 93MiB |
|    0   N/A  N/A   3276582      G   /usr/bin/gnome-shell               43MiB |
|    0   N/A  N/A   4047176      C   uwsgi                            2837MiB |
+-----------------------------------------------------------------------------+
root@vm01:/var/log#
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.10
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6836/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2001
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2001/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2001/comments
|
https://api.github.com/repos/ollama/ollama/issues/2001/events
|
https://github.com/ollama/ollama/issues/2001
| 2,081,768,512
|
I_kwDOJ0Z1Ps58FURA
| 2,001
|
[feature request] ollama website API? or maybe docs for it...
|
{
"login": "AlizerUncaged",
"id": 86959368,
"node_id": "MDQ6VXNlcjg2OTU5MzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/86959368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlizerUncaged",
"html_url": "https://github.com/AlizerUncaged",
"followers_url": "https://api.github.com/users/AlizerUncaged/followers",
"following_url": "https://api.github.com/users/AlizerUncaged/following{/other_user}",
"gists_url": "https://api.github.com/users/AlizerUncaged/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlizerUncaged/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlizerUncaged/subscriptions",
"organizations_url": "https://api.github.com/users/AlizerUncaged/orgs",
"repos_url": "https://api.github.com/users/AlizerUncaged/repos",
"events_url": "https://api.github.com/users/AlizerUncaged/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlizerUncaged/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-01-15T10:48:09
| 2024-05-10T01:01:40
| 2024-05-10T01:01:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is it possible to have a way of fetching/searching the model list from the official Ollama website, so that other programs can integrate with it as well? If it already exists, maybe add some small docs about it.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2001/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8596
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8596/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8596/comments
|
https://api.github.com/repos/ollama/ollama/issues/8596/events
|
https://github.com/ollama/ollama/issues/8596
| 2,811,682,370
|
I_kwDOJ0Z1Ps6nlt5C
| 8,596
|
Ollama on WSL2 detects GPU but timesout when running inference
|
{
"login": "rz1027",
"id": 53318196,
"node_id": "MDQ6VXNlcjUzMzE4MTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/53318196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rz1027",
"html_url": "https://github.com/rz1027",
"followers_url": "https://api.github.com/users/rz1027/followers",
"following_url": "https://api.github.com/users/rz1027/following{/other_user}",
"gists_url": "https://api.github.com/users/rz1027/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rz1027/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rz1027/subscriptions",
"organizations_url": "https://api.github.com/users/rz1027/orgs",
"repos_url": "https://api.github.com/users/rz1027/repos",
"events_url": "https://api.github.com/users/rz1027/events{/privacy}",
"received_events_url": "https://api.github.com/users/rz1027/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 5
| 2025-01-26T17:21:17
| 2025-01-28T04:38:00
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am using ManjaroWSL [https://github.com/sileshn/ManjaroWSL2] on Windows 11. Ollama runs fine on WSL and detects my Nvidia 4070 on startup.
The thing is, when I load a model and run it, I get this error:
`gpu VRAM usage didn't recover within timeout`
and it showed that the work was offloaded to the CPU.
I had to install Ollama on the Windows side, migrate all my models, and use the Ollama API hosted on Windows in order to use the GPU.
Several people on my team reported the same problem.
Models I saw this problem with: llava:13b; it runs lightning fast on the Windows side, but is too slow on Linux.
```
nvidia-smi
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03 Driver Version: 561.09 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4070 ... On | 00000000:01:00.0 On | N/A |
| N/A 57C P0 27W / 105W | 7390MiB / 8188MiB | 42% Default |
| | | N/A |
```
### OS
WSL2
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8596/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2712
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2712/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2712/comments
|
https://api.github.com/repos/ollama/ollama/issues/2712/events
|
https://github.com/ollama/ollama/issues/2712
| 2,151,472,386
|
I_kwDOJ0Z1Ps6APN0C
| 2,712
|
Consider model descriptions in ollama.com search
|
{
"login": "shouryan01",
"id": 32345320,
"node_id": "MDQ6VXNlcjMyMzQ1MzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/32345320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shouryan01",
"html_url": "https://github.com/shouryan01",
"followers_url": "https://api.github.com/users/shouryan01/followers",
"following_url": "https://api.github.com/users/shouryan01/following{/other_user}",
"gists_url": "https://api.github.com/users/shouryan01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shouryan01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shouryan01/subscriptions",
"organizations_url": "https://api.github.com/users/shouryan01/orgs",
"repos_url": "https://api.github.com/users/shouryan01/repos",
"events_url": "https://api.github.com/users/shouryan01/events{/privacy}",
"received_events_url": "https://api.github.com/users/shouryan01/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-02-23T17:01:02
| 2024-05-03T23:21:04
| 2024-05-03T23:21:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The search only matches the title of the model card. For example, if I want to see the Gemma models, searching for "Google" returns nothing.
The expected behavior would be that searching for Google shows the Gemma model tab, since the description preview contains the word "Google".
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2712/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2712/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8256
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8256/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8256/comments
|
https://api.github.com/repos/ollama/ollama/issues/8256/events
|
https://github.com/ollama/ollama/issues/8256
| 2,761,001,798
|
I_kwDOJ0Z1Ps6kkYtG
| 8,256
|
Ollama is not using intel GPU on mac
|
{
"login": "DevAdalat",
"id": 101609870,
"node_id": "U_kgDOBg5xjg",
"avatar_url": "https://avatars.githubusercontent.com/u/101609870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DevAdalat",
"html_url": "https://github.com/DevAdalat",
"followers_url": "https://api.github.com/users/DevAdalat/followers",
"following_url": "https://api.github.com/users/DevAdalat/following{/other_user}",
"gists_url": "https://api.github.com/users/DevAdalat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DevAdalat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DevAdalat/subscriptions",
"organizations_url": "https://api.github.com/users/DevAdalat/orgs",
"repos_url": "https://api.github.com/users/DevAdalat/repos",
"events_url": "https://api.github.com/users/DevAdalat/events{/privacy}",
"received_events_url": "https://api.github.com/users/DevAdalat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-12-27T15:20:57
| 2024-12-29T12:30:50
| 2024-12-29T12:30:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi, I just installed Ollama on my Mac, but it is not using the GPU. I have an Intel-based Mac with a small built-in GPU. Can you tell me what the issue is here?
<img width="374" alt="Screenshot 2024-12-27 at 8 46 30 PM" src="https://github.com/user-attachments/assets/2d70913f-2cbe-430a-8279-d02fd503312a" />
### OS
macOS
### GPU
Intel
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "DevAdalat",
"id": 101609870,
"node_id": "U_kgDOBg5xjg",
"avatar_url": "https://avatars.githubusercontent.com/u/101609870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DevAdalat",
"html_url": "https://github.com/DevAdalat",
"followers_url": "https://api.github.com/users/DevAdalat/followers",
"following_url": "https://api.github.com/users/DevAdalat/following{/other_user}",
"gists_url": "https://api.github.com/users/DevAdalat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DevAdalat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DevAdalat/subscriptions",
"organizations_url": "https://api.github.com/users/DevAdalat/orgs",
"repos_url": "https://api.github.com/users/DevAdalat/repos",
"events_url": "https://api.github.com/users/DevAdalat/events{/privacy}",
"received_events_url": "https://api.github.com/users/DevAdalat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8256/timeline
| null |
completed
| false
|