Column schema for the records below (one per ollama/ollama GitHub issue); fields appear in this order, separated by `|`:

| column | type | stats |
|---|---|---|
| url | string | lengths 51–54 |
| repository_url | string | 1 class |
| labels_url | string | lengths 65–68 |
| comments_url | string | lengths 60–63 |
| events_url | string | lengths 58–61 |
| html_url | string | lengths 39–44 |
| id | int64 | 1.78B–2.82B |
| node_id | string | lengths 18–19 |
| number | int64 | 1–8.69k |
| title | string | lengths 1–382 |
| user | dict | |
| labels | list | lengths 0–5 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–2 |
| milestone | null | |
| comments | int64 | 0–323 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 4 classes |
| sub_issues_summary | dict | |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 2–118k, nullable (⌀) |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 60–63 |
| performed_via_github_app | null | |
| state_reason | string | 4 classes |
| is_pull_request | bool | 2 classes |
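These columns mirror the GitHub REST issues payload; a minimal sketch of pulling one such record straight from the source, using a value from the `url` column of the first record below:

```
# fetch the first record below directly from the GitHub API
curl -s https://api.github.com/repos/ollama/ollama/issues/4503
```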
https://api.github.com/repos/ollama/ollama/issues/4503
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4503/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4503/comments
|
https://api.github.com/repos/ollama/ollama/issues/4503/events
|
https://github.com/ollama/ollama/issues/4503
| 2,303,361,162
|
I_kwDOJ0Z1Ps6JSoCK
| 4,503
|
Ollama create fails when using a utf16 Modelfile
|
{
"login": "dehlong",
"id": 112163027,
"node_id": "U_kgDOBq940w",
"avatar_url": "https://avatars.githubusercontent.com/u/112163027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dehlong",
"html_url": "https://github.com/dehlong",
"followers_url": "https://api.github.com/users/dehlong/followers",
"following_url": "https://api.github.com/users/dehlong/following{/other_user}",
"gists_url": "https://api.github.com/users/dehlong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dehlong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dehlong/subscriptions",
"organizations_url": "https://api.github.com/users/dehlong/orgs",
"repos_url": "https://api.github.com/users/dehlong/repos",
"events_url": "https://api.github.com/users/dehlong/events{/privacy}",
"received_events_url": "https://api.github.com/users/dehlong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 22
| 2024-05-17T18:35:36
| 2024-12-13T23:37:39
| 2024-05-20T18:26:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello,
I try to create a new model and no matter what the model file is, 90% of the time I get:
Error: command must be one of "from", "license", "template", "system", "adapter", "parameter", or "message"
Is there any solution to this?
This is my modelfile:
FROM llama3
PARAMETER temperature 1
PARAMETER num_ctx 4096
SYSTEM You are Mario from super mario bros, acting as an assistant.
### OS
Linux
### GPU
Other
### CPU
Intel
### Ollama version
0.1.38
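Given the issue title, the failure is presumably the Modelfile being saved as UTF-16, which the parser cannot read; a minimal workaround sketch is to re-encode it as UTF-8 before running `ollama create` (the model name `mario` and the file names here are assumptions):
```
# re-encode the UTF-16 Modelfile as UTF-8, then create from the converted copy
iconv -f UTF-16 -t UTF-8 Modelfile > Modelfile.utf8
ollama create mario -f Modelfile.utf8
```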
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4503/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2482
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2482/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2482/comments
|
https://api.github.com/repos/ollama/ollama/issues/2482/events
|
https://github.com/ollama/ollama/pull/2482
| 2,133,232,444
|
PR_kwDOJ0Z1Ps5mzPR0
| 2,482
|
add support for json files and to allow for more than 41666 embeddings
|
{
"login": "donbr",
"id": 7340008,
"node_id": "MDQ6VXNlcjczNDAwMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7340008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donbr",
"html_url": "https://github.com/donbr",
"followers_url": "https://api.github.com/users/donbr/followers",
"following_url": "https://api.github.com/users/donbr/following{/other_user}",
"gists_url": "https://api.github.com/users/donbr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donbr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donbr/subscriptions",
"organizations_url": "https://api.github.com/users/donbr/orgs",
"repos_url": "https://api.github.com/users/donbr/repos",
"events_url": "https://api.github.com/users/donbr/events{/privacy}",
"received_events_url": "https://api.github.com/users/donbr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-13T22:31:31
| 2024-11-21T03:03:31
| 2024-11-21T03:03:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2482",
"html_url": "https://github.com/ollama/ollama/pull/2482",
"diff_url": "https://github.com/ollama/ollama/pull/2482.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2482.patch",
"merged_at": null
}
|
Added support for JSON files and a larger batch size based on embedding limitations. Ran into issues with the syntax of the JSONLoader arguments, so went with TextLoader for now.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2482/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5350
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5350/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5350/comments
|
https://api.github.com/repos/ollama/ollama/issues/5350/events
|
https://github.com/ollama/ollama/issues/5350
| 2,379,405,545
|
I_kwDOJ0Z1Ps6N0tjp
| 5,350
|
Gemma 2 9B cannot run
|
{
"login": "Forevery1",
"id": 19872771,
"node_id": "MDQ6VXNlcjE5ODcyNzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19872771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Forevery1",
"html_url": "https://github.com/Forevery1",
"followers_url": "https://api.github.com/users/Forevery1/followers",
"following_url": "https://api.github.com/users/Forevery1/following{/other_user}",
"gists_url": "https://api.github.com/users/Forevery1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Forevery1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Forevery1/subscriptions",
"organizations_url": "https://api.github.com/users/Forevery1/orgs",
"repos_url": "https://api.github.com/users/Forevery1/repos",
"events_url": "https://api.github.com/users/Forevery1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Forevery1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-06-28T01:59:29
| 2024-07-03T16:42:09
| 2024-06-29T14:21:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
<img width="732" alt="image" src="https://github.com/ollama/ollama/assets/19872771/e28dac56-9a8b-4310-84d3-97bf3b2594f4">
### OS
Ubuntu 22.04.4 LTS
### GPU
Nvidia 4060
### CPU
Intel
### Ollama version
0.1.47
|
{
"login": "Forevery1",
"id": 19872771,
"node_id": "MDQ6VXNlcjE5ODcyNzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19872771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Forevery1",
"html_url": "https://github.com/Forevery1",
"followers_url": "https://api.github.com/users/Forevery1/followers",
"following_url": "https://api.github.com/users/Forevery1/following{/other_user}",
"gists_url": "https://api.github.com/users/Forevery1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Forevery1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Forevery1/subscriptions",
"organizations_url": "https://api.github.com/users/Forevery1/orgs",
"repos_url": "https://api.github.com/users/Forevery1/repos",
"events_url": "https://api.github.com/users/Forevery1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Forevery1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5350/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5350/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/802
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/802/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/802/comments
|
https://api.github.com/repos/ollama/ollama/issues/802/events
|
https://github.com/ollama/ollama/issues/802
| 1,944,932,035
|
I_kwDOJ0Z1Ps5z7U7D
| 802
|
Relative API link in the readme doesn't work
|
{
"login": "richawo",
"id": 35015261,
"node_id": "MDQ6VXNlcjM1MDE1MjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/35015261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richawo",
"html_url": "https://github.com/richawo",
"followers_url": "https://api.github.com/users/richawo/followers",
"following_url": "https://api.github.com/users/richawo/following{/other_user}",
"gists_url": "https://api.github.com/users/richawo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richawo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richawo/subscriptions",
"organizations_url": "https://api.github.com/users/richawo/orgs",
"repos_url": "https://api.github.com/users/richawo/repos",
"events_url": "https://api.github.com/users/richawo/events{/privacy}",
"received_events_url": "https://api.github.com/users/richawo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-10-16T10:51:26
| 2023-10-25T23:22:13
| 2023-10-25T23:22:12
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It opens up:
https://github.com/jmorganca/docs/api.md
Rather than:
https://github.com/jmorganca/ollama/blob/main/docs/api.md
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/802/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7081
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7081/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7081/comments
|
https://api.github.com/repos/ollama/ollama/issues/7081/events
|
https://github.com/ollama/ollama/issues/7081
| 2,562,164,972
|
I_kwDOJ0Z1Ps6Yt4js
| 7,081
|
Ollama performs *much* slower via API than CLI on M1 Mac
|
{
"login": "bigxalx",
"id": 511330,
"node_id": "MDQ6VXNlcjUxMTMzMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/511330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bigxalx",
"html_url": "https://github.com/bigxalx",
"followers_url": "https://api.github.com/users/bigxalx/followers",
"following_url": "https://api.github.com/users/bigxalx/following{/other_user}",
"gists_url": "https://api.github.com/users/bigxalx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bigxalx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bigxalx/subscriptions",
"organizations_url": "https://api.github.com/users/bigxalx/orgs",
"repos_url": "https://api.github.com/users/bigxalx/repos",
"events_url": "https://api.github.com/users/bigxalx/events{/privacy}",
"received_events_url": "https://api.github.com/users/bigxalx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-10-02T16:45:09
| 2024-10-04T09:48:01
| 2024-10-03T19:45:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
### CLI
When I run **codestral:22b-v0.1-q2_K** on my M1 MacBook Air via the CLI with `ollama run codestral:22b-v0.1-q2_K`, it performs a little slowly, but it's usable. When I look at `ollama ps`, it prints the following:
| NAME | ID | SIZE | PROCESSOR | UNTIL |
|--------------------------|---------------|---------|------------|---------------------|
| codestral:22b-v0.1-q2_K | 0e1127d332ef | 9.6 GB | 100% GPU | 4 minutes from now |
### API
However, when I configure [Continue](https://continue.dev) to use the same model via Ollama, it is _much_ slower (at least 2-5X slower) and uses much more RAM:
| NAME | ID | SIZE | PROCESSOR | UNTIL |
|--------------------------|---------------|---------|--------------------|--------------|
| codestral:22b-v0.1-q2_K | 0e1127d332ef | 19 GB | 43%/57% CPU/GPU | Stopping... |
---
This happens regardless of whether I start ollama with `ollama serve` or via the Mac app.
EDIT: I just tried Llama3.2 on the CLI and with [Enchanted LLM](https://github.com/AugustDev/enchanted). It seemingly confirms that the problem might be with the API, as it's a different model and a different app, but I experience the same problem: it runs about 2-3X slower via the API than when I ask "directly" via `ollama run ...`
EDIT2: I tested Llama3.2 and Codestral with LM Studio as the backend. When I run the models directly (via the GUI in this case), LM Studio is a bit slower than Ollama. When I run via the API / server, LM Studio behaves the same way as with the GUI (meaning its API calls are much faster than Ollama's).
EDIT3: I did some more research, testing the API with different parameters. I found that setting stream to false makes the API behave like the CLI, e.g.:
```
curl http://localhost:11434/api/generate -d '{
  "model": "codestral:22b-v0.1-q2_K",
  "prompt": "Tell me about yourself",
  "stream": false
}'
```
The same applies to the Llama3.2 model.
The difference between "stream": false and "stream": true (the default) is astounding: 4GB of RAM usage vs 25GB for Llama 3.2.
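For comparison, a sketch of the same request with streaming left at its default; only the `stream` field differs from the call above:
```
# identical request with the default streaming behaviour, for comparison
curl http://localhost:11434/api/generate -d '{
  "model": "codestral:22b-v0.1-q2_K",
  "prompt": "Tell me about yourself",
  "stream": true
}'
```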
<details>
<summary>Here are the server startup logs in case they are relevant:</summary>
2024/10/02 16:59:03 routes.go:1153: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/big/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: http_proxy: https_proxy: no_proxy:]"
time=2024-10-02T16:59:03.500+02:00 level=INFO source=images.go:753 msg="total blobs: 16"
time=2024-10-02T16:59:03.500+02:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-10-02T16:59:03.501+02:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11434 (version 0.3.12)"
time=2024-10-02T16:59:03.501+02:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/var/folders/mb/gh2_kwk50611j_9p8_0pmkn00000gn/T/ollama151879657/runners
time=2024-10-02T16:59:03.537+02:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners=[metal]
time=2024-10-02T16:59:03.572+02:00 level=INFO source=types.go:107 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="10.7 GiB" available="10.7 GiB"
time=2024-10-02T16:59:16.411+02:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/big/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e gpu=0 parallel=4 available=11453251584 required="3.6 GiB"
time=2024-10-02T16:59:16.411+02:00 level=INFO source=server.go:103 msg="system memory" total="16.0 GiB" free="11.7 GiB" free_swap="0 B"
time=2024-10-02T16:59:16.411+02:00 level=INFO source=memory.go:326 msg="offload to metal" layers.requested=-1 layers.model=31 layers.offload=31 layers.split="" memory.available="[10.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.6 GiB" memory.required.partial="3.6 GiB" memory.required.kv="480.0 MiB" memory.required.allocations="[3.6 GiB]" memory.weights.total="2.0 GiB" memory.weights.repeating="1.9 GiB" memory.weights.nonrepeating="81.0 MiB" memory.graph.full="960.0 MiB" memory.graph.partial="960.0 MiB"
time=2024-10-02T16:59:16.412+02:00 level=INFO source=server.go:388 msg="starting llama server" cmd="/var/folders/mb/gh2_kwk50611j_9p8_0pmkn00000gn/T/ollama151879657/runners/metal/ollama_llama_server --model /Users/big/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e --ctx-size 16384 --batch-size 512 --embedding --log-disable --n-gpu-layers 31 --parallel 4 --port 60403"
time=2024-10-02T16:59:16.452+02:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-10-02T16:59:16.453+02:00 level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
time=2024-10-02T16:59:16.454+02:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=3670 commit="194ef086" tid="0x2046d4f40" timestamp=1727881156
INFO [main] system info | n_threads=4 n_threads_batch=4 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="0x2046d4f40" timestamp=1727881156 total_threads=8
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="60403" tid="0x2046d4f40" timestamp=1727881156
llama_model_loader: loaded meta data with 19 key-value pairs and 483 tensors from /Users/big/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = starcoder2
llama_model_loader: - kv 1: general.name str = starcoder2-3b
llama_model_loader: - kv 2: starcoder2.block_count u32 = 30
llama_model_loader: - kv 3: starcoder2.context_length u32 = 16384
llama_model_loader: - kv 4: starcoder2.embedding_length u32 = 3072
llama_model_loader: - kv 5: starcoder2.feed_forward_length u32 = 12288
llama_model_loader: - kv 6: starcoder2.attention.head_count u32 = 24
llama_model_loader: - kv 7: starcoder2.attention.head_count_kv u32 = 2
llama_model_loader: - kv 8: starcoder2.rope.freq_base f32 = 999999.437500
llama_model_loader: - kv 9: starcoder2.attention.layer_norm_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,49152] = ["<|endoftext|>", "<fim_prefix>", "<f...
llama_model_loader: - kv 13: tokenizer.ggml.token_type arr[i32,49152] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv 14: tokenizer.ggml.merges arr[str,48872] = ["Ġ Ġ", "ĠĠ ĠĠ", "ĠĠĠĠ ĠĠ...
llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 0
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 0
llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - type f32: 302 tensors
llama_model_loader: - type q4_0: 181 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special tokens cache size = 38
llm_load_vocab: token to piece cache size = 0.2828 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = starcoder2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 49152
llm_load_print_meta: n_merges = 48872
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 16384
llm_load_print_meta: n_embd = 3072
llm_load_print_meta: n_layer = 30
llm_load_print_meta: n_head = 24
llm_load_print_meta: n_head_kv = 2
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 12
llm_load_print_meta: n_embd_k_gqa = 256
llm_load_print_meta: n_embd_v_gqa = 256
llm_load_print_meta: f_norm_eps = 1.0e-05
llm_load_print_meta: f_norm_rms_eps = 0.0e+00
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 12288
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 999999.4
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 16384
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 3B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 3.03 B
llm_load_print_meta: model size = 1.59 GiB (4.51 BPW)
llm_load_print_meta: general.name = starcoder2-3b
llm_load_print_meta: BOS token = 0 '<|endoftext|>'
llm_load_print_meta: EOS token = 0 '<|endoftext|>'
llm_load_print_meta: UNK token = 0 '<|endoftext|>'
llm_load_print_meta: LF token = 164 'Ä'
llm_load_print_meta: EOT token = 0 '<|endoftext|>'
llm_load_print_meta: max token length = 512
llm_load_tensors: ggml ctx size = 0.40 MiB
ggml_backend_metal_log_allocated_size: allocated buffer, size = 1629.03 MiB, ( 1629.09 / 10922.67)
llm_load_tensors: offloading 30 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 31/31 layers to GPU
llm_load_tensors: CPU buffer size = 81.00 MiB
llm_load_tensors: Metal buffer size = 1629.02 MiB
llama_new_context_with_model: n_ctx = 16384
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 999999.4
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1
ggml_metal_init: picking default device: Apple M1
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name: Apple M1
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB
llama_kv_cache_init: Metal KV buffer size = 480.00 MiB
llama_new_context_with_model: KV self size = 480.00 MiB, K (f16): 240.00 MiB, V (f16): 240.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.80 MiB
llama_new_context_with_model: Metal compute buffer size = 824.00 MiB
llama_new_context_with_model: CPU compute buffer size = 38.01 MiB
llama_new_context_with_model: graph nodes = 1147
llama_new_context_with_model: graph splits = 2
time=2024-10-02T16:59:16.705+02:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
INFO [main] model loaded | tid="0x2046d4f40" timestamp=1727881158
time=2024-10-02T16:59:18.968+02:00 level=INFO source=server.go:626 msg="llama runner started in 2.51 seconds"
[GIN] 2024/10/02 - 16:59:19 | 200 | 3.135537834s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/10/02 - 16:59:27 | 200 | 1.585210792s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/10/02 - 16:59:28 | 200 | 1.736311083s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/10/02 - 16:59:29 | 200 | 2.43617475s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/10/02 - 16:59:30 | 200 | 1.321447333s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/10/02 - 16:59:35 | 200 | 1.341725042s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/10/02 - 16:59:36 | 200 | 1.361396667s | 127.0.0.1 | POST "/api/generate"
time=2024-10-02T16:59:46.755+02:00 level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=0 library=metal total="10.7 GiB" available="7.1 GiB"
time=2024-10-02T16:59:46.790+02:00 level=INFO source=server.go:103 msg="system memory" total="16.0 GiB" free="11.5 GiB" free_swap="0 B"
time=2024-10-02T16:59:46.791+02:00 level=INFO source=memory.go:326 msg="offload to metal" layers.requested=-1 layers.model=57 layers.offload=26 layers.split="" memory.available="[10.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="18.5 GiB" memory.required.partial="10.6 GiB" memory.required.kv="7.0 GiB" memory.required.allocations="[10.6 GiB]" memory.weights.total="14.5 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="157.5 MiB" memory.graph.full="3.1 GiB" memory.graph.partial="3.1 GiB"
time=2024-10-02T16:59:46.791+02:00 level=INFO source=server.go:388 msg="starting llama server" cmd="/var/folders/mb/gh2_kwk50611j_9p8_0pmkn00000gn/T/ollama151879657/runners/metal/ollama_llama_server --model /Users/big/.ollama/models/blobs/sha256-a645a2a1d407b876edf4731dd223cf8a09fa168efc96b20496a89bcdf702f7b4 --ctx-size 32768 --batch-size 512 --embedding --log-disable --n-gpu-layers 26 --no-mmap --parallel 1 --port 60468"
time=2024-10-02T16:59:46.793+02:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-10-02T16:59:46.793+02:00 level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
time=2024-10-02T16:59:46.793+02:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=3670 commit="194ef086" tid="0x2046d4f40" timestamp=1727881186
INFO [main] system info | n_threads=4 n_threads_batch=4 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="0x2046d4f40" timestamp=1727881186 total_threads=8
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="60468" tid="0x2046d4f40" timestamp=1727881186
llama_model_loader: loaded meta data with 25 key-value pairs and 507 tensors from /Users/big/.ollama/models/blobs/sha256-a645a2a1d407b876edf4731dd223cf8a09fa168efc96b20496a89bcdf702f7b4 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Codestral-22B-v0.1
llama_model_loader: - kv 2: llama.block_count u32 = 56
llama_model_loader: - kv 3: llama.context_length u32 = 32768
llama_model_loader: - kv 4: llama.embedding_length u32 = 6144
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 16384
llama_model_loader: - kv 6: llama.attention.head_count u32 = 48
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 10
llama_model_loader: - kv 11: llama.vocab_size u32 = 32768
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.add_space_prefix bool = true
llama_model_loader: - kv 14: tokenizer.ggml.model str = llama
llama_model_loader: - kv 15: tokenizer.ggml.pre str = default
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,32768] = ["<unk>", "<s>", "</s>", "[INST]", "[...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,32768] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,32768] = [2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: general.quantization_version u32 = 2
llama_model_loader: - type f32: 113 tensors
llama_model_loader: - type q2_K: 225 tensors
llama_model_loader: - type q3_K: 112 tensors
llama_model_loader: - type q4_K: 56 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens cache size = 771
llm_load_vocab: token to piece cache size = 0.1731 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32768
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 6144
llm_load_print_meta: n_layer = 56
llm_load_print_meta: n_head = 48
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 6
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 16384
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q2_K - Medium
llm_load_print_meta: model params = 22.25 B
llm_load_print_meta: model size = 7.70 GiB (2.97 BPW)
llm_load_print_meta: general.name = Codestral-22B-v0.1
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 781 '<0x0A>'
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size = 0.47 MiB
llm_load_tensors: offloading 26 repeating layers to GPU
llm_load_tensors: offloaded 26/57 layers to GPU
llm_load_tensors: CPU buffer size = 4328.18 MiB
llm_load_tensors: Metal buffer size = 3559.97 MiB
time=2024-10-02T16:59:47.045+02:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 32768
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1
ggml_metal_init: picking default device: Apple M1
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name: Apple M1
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB
llama_kv_cache_init: CPU KV buffer size = 3840.00 MiB
llama_kv_cache_init: Metal KV buffer size = 3328.00 MiB
llama_new_context_with_model: KV self size = 7168.00 MiB, K (f16): 3584.00 MiB, V (f16): 3584.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.15 MiB
llama_new_context_with_model: Metal compute buffer size = 3184.00 MiB
llama_new_context_with_model: CPU compute buffer size = 3184.01 MiB
llama_new_context_with_model: graph nodes = 1798
llama_new_context_with_model: graph splits = 483
^Ctime=2024-10-02T17:00:05.986+02:00 level=WARN source=server.go:594 msg="client connection closed before server finished loading, aborting load"
time=2024-10-02T17:00:05.987+02:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start: context canceled"
[GIN] 2024/10/02 - 17:00:05 | 499 | 19.265245125s | 127.0.0.1 | POST "/api/generate"
</details>
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.12
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7081/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/447
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/447/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/447/comments
|
https://api.github.com/repos/ollama/ollama/issues/447/events
|
https://github.com/ollama/ollama/issues/447
| 1,875,408,937
|
I_kwDOJ0Z1Ps5vyHgp
| 447
|
commit 8bbff2df986629e5481547e913ab4de0245afb37 stops "ollama ls" from working here
|
{
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/followers",
"following_url": "https://api.github.com/users/xyproto/following{/other_user}",
"gists_url": "https://api.github.com/users/xyproto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyproto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyproto/subscriptions",
"organizations_url": "https://api.github.com/users/xyproto/orgs",
"repos_url": "https://api.github.com/users/xyproto/repos",
"events_url": "https://api.github.com/users/xyproto/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyproto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2023-08-31T12:14:51
| 2023-09-04T08:27:27
| 2023-09-04T08:27:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
`go generate ./... && go build . && ./ollama ls` worked fine for previous versions, but does not work for the latest commit.
Using `git bisect`, the commit that introduced this problem appears to be 8bbff2df986629e5481547e913ab4de0245afb37 (from the 28th of August).
Here is the error message when it fails:
```
panic: runtime error: slice bounds out of range [:12] with length 0
goroutine 1 [running]:
github.com/jmorganca/ollama/cmd.ListHandler(0x140004a8200?, {0x1052159e0, 0x0, 0x104a71539?})
/Users/username/clones/ollama/cmd/cmd.go:199 +0x4e8
github.com/spf13/cobra.(*Command).execute(0x1400045bb00, {0x1052159e0, 0x0, 0x0})
/Users/username/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x658
github.com/spf13/cobra.(*Command).ExecuteC(0x1400045a900)
/Users/username/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x320
github.com/spf13/cobra.(*Command).Execute(...)
/Users/username/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
/Users/username/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
/Users/username/clones/ollama/main.go:11 +0x54
```
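A minimal sketch of automating the bisect described above with `git bisect run` (the known-good commit is left as a placeholder):
```
# exit 0 marks a commit good, any other status marks it bad
git bisect start HEAD <known-good-commit>
git bisect run sh -c 'go generate ./... && go build . && ./ollama ls'
```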
|
{
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/followers",
"following_url": "https://api.github.com/users/xyproto/following{/other_user}",
"gists_url": "https://api.github.com/users/xyproto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyproto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyproto/subscriptions",
"organizations_url": "https://api.github.com/users/xyproto/orgs",
"repos_url": "https://api.github.com/users/xyproto/repos",
"events_url": "https://api.github.com/users/xyproto/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyproto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/447/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4894
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4894/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4894/comments
|
https://api.github.com/repos/ollama/ollama/issues/4894/events
|
https://github.com/ollama/ollama/issues/4894
| 2,339,536,264
|
I_kwDOJ0Z1Ps6Lcn2I
| 4,894
|
Feature: Allow setting OLLAMA_NUM_PARALLEL per model
|
{
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/followers",
"following_url": "https://api.github.com/users/sammcj/following{/other_user}",
"gists_url": "https://api.github.com/users/sammcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammcj/subscriptions",
"organizations_url": "https://api.github.com/users/sammcj/orgs",
"repos_url": "https://api.github.com/users/sammcj/repos",
"events_url": "https://api.github.com/users/sammcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammcj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-06-07T03:55:36
| 2024-10-24T18:17:13
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be great if you could set OLLAMA_NUM_PARALLEL per model.
Example use case:
- You have one large "smart" model that you only ever want one request at a time going to, to avoid using all your memory.
- You have a smaller "fast" model (or just one with a smaller context) that you might want to allow a number of parallel requests to.
Perhaps this could be configured with a [Modelfile](https://github.com/ollama/ollama/blob/main/docs/modelfile.md) and a corresponding [API parameter](https://github.com/ollama/ollama/blob/main/docs/api.md#parameters) rather than at launch time?
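A hypothetical sketch of what the Modelfile side of this request could look like; `num_parallel` is not an existing ollama parameter, it is only an illustration:
```
FROM llama3
# hypothetical parameter, not part of ollama today: cap this model at one in-flight request
PARAMETER num_parallel 1
```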
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4894/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4894/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5748
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5748/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5748/comments
|
https://api.github.com/repos/ollama/ollama/issues/5748/events
|
https://github.com/ollama/ollama/issues/5748
| 2,413,697,254
|
I_kwDOJ0Z1Ps6P3hjm
| 5,748
|
ShipIt folder taking 1GB
|
{
"login": "cliffordh",
"id": 1755156,
"node_id": "MDQ6VXNlcjE3NTUxNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1755156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cliffordh",
"html_url": "https://github.com/cliffordh",
"followers_url": "https://api.github.com/users/cliffordh/followers",
"following_url": "https://api.github.com/users/cliffordh/following{/other_user}",
"gists_url": "https://api.github.com/users/cliffordh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cliffordh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cliffordh/subscriptions",
"organizations_url": "https://api.github.com/users/cliffordh/orgs",
"repos_url": "https://api.github.com/users/cliffordh/repos",
"events_url": "https://api.github.com/users/cliffordh/events{/privacy}",
"received_events_url": "https://api.github.com/users/cliffordh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677279472,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A",
"url": "https://api.github.com/repos/ollama/ollama/labels/macos",
"name": "macos",
"color": "E2DBC0",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-07-17T13:56:58
| 2024-07-17T18:54:43
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Running a junk file scanner, I found that the folder com.electron.ollama.ShipIt is taking almost 1GB in ~/Library/Caches. This should be cleared automatically.
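Until then, a manual cleanup sketch, assuming the path reported above (the folder looks like a Squirrel/ShipIt updater cache and is presumably recreated when the next update downloads):
```
# remove the updater cache folder reported above
rm -rf ~/Library/Caches/com.electron.ollama.ShipIt
```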
### OS
macOS
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5748/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3688
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3688/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3688/comments
|
https://api.github.com/repos/ollama/ollama/issues/3688/events
|
https://github.com/ollama/ollama/pull/3688
| 2,247,213,761
|
PR_kwDOJ0Z1Ps5s3a41
| 3,688
|
example error: ollama list models with raw-Name
|
{
"login": "KevinLiangX",
"id": 40968187,
"node_id": "MDQ6VXNlcjQwOTY4MTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/40968187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KevinLiangX",
"html_url": "https://github.com/KevinLiangX",
"followers_url": "https://api.github.com/users/KevinLiangX/followers",
"following_url": "https://api.github.com/users/KevinLiangX/following{/other_user}",
"gists_url": "https://api.github.com/users/KevinLiangX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KevinLiangX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KevinLiangX/subscriptions",
"organizations_url": "https://api.github.com/users/KevinLiangX/orgs",
"repos_url": "https://api.github.com/users/KevinLiangX/repos",
"events_url": "https://api.github.com/users/KevinLiangX/events{/privacy}",
"received_events_url": "https://api.github.com/users/KevinLiangX/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-17T02:41:18
| 2024-05-09T02:50:11
| 2024-05-09T02:50:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3688",
"html_url": "https://github.com/ollama/ollama/pull/3688",
"diff_url": "https://github.com/ollama/ollama/pull/3688.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3688.patch",
"merged_at": null
}
|

After the change:

|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3688/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/332
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/332/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/332/comments
|
https://api.github.com/repos/ollama/ollama/issues/332/events
|
https://github.com/ollama/ollama/issues/332
| 1,847,107,942
|
I_kwDOJ0Z1Ps5uGKFm
| 332
|
only regenerate diff of embedding layer
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2023-08-11T16:25:43
| 2023-08-15T19:10:25
| 2023-08-15T19:10:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/332/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2609
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2609/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2609/comments
|
https://api.github.com/repos/ollama/ollama/issues/2609/events
|
https://github.com/ollama/ollama/issues/2609
| 2,143,713,954
|
I_kwDOJ0Z1Ps5_xnqi
| 2,609
|
[Question\Suggestion] Result of function calling.
|
{
"login": "gerwintmg",
"id": 17082189,
"node_id": "MDQ6VXNlcjE3MDgyMTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/17082189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gerwintmg",
"html_url": "https://github.com/gerwintmg",
"followers_url": "https://api.github.com/users/gerwintmg/followers",
"following_url": "https://api.github.com/users/gerwintmg/following{/other_user}",
"gists_url": "https://api.github.com/users/gerwintmg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gerwintmg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerwintmg/subscriptions",
"organizations_url": "https://api.github.com/users/gerwintmg/orgs",
"repos_url": "https://api.github.com/users/gerwintmg/repos",
"events_url": "https://api.github.com/users/gerwintmg/events{/privacy}",
"received_events_url": "https://api.github.com/users/gerwintmg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-02-20T07:32:17
| 2024-11-06T18:55:04
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently I am experimenting with function calling (getting a JSON result) and returning the result from the function call to the LLM.
When using the chat API you can specify system, user, and assistant roles. I was wondering if we would be able to add the option of a **function result** role,
giving the following chat:
```json
[
  {
    "type": "system",
    "content": "When asked about the weather, give a JSON output."
  },
  {
    "type": "user",
    "content": "What is the weather like in Brussels?"
  },
  {
    "type": "assistant",
    "content": "{ \"City\": \"Brussel\" }"
  },
  {
    "type": "function_result",
    "content": "{ \"Temperature\": \"24 C\", \"Rain\": \"40%\" }"
  },
  {
    "type": "assistant",
    "content": "Currently it is 24 degrees Celsius and there is a 40% chance of rain."
  }
]
```
Or is there already a way to use the results of a function call?
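For illustration, a minimal sketch of what I mean, assuming `/api/chat` would accept a `tool`-role message carrying the function result (the model name, tool name, and payload here are hypothetical):
```python
# Sketch: feed a function result back to the model as a "tool"-role message.
import json, urllib.request

messages = [
    {"role": "user", "content": "What is the weather like in Brussels?"},
    # hypothetical: the model previously asked for get_weather(city="Brussels")
    {"role": "assistant", "content": "", "tool_calls": [
        {"function": {"name": "get_weather", "arguments": {"city": "Brussels"}}},
    ]},
    # the caller runs get_weather() itself and returns the result here
    {"role": "tool", "content": json.dumps({"Temperature": "24 C", "Rain": "40%"})},
]
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps({"model": "llama3", "messages": messages, "stream": False}).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req))["message"]["content"])
```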
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2609/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1635
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1635/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1635/comments
|
https://api.github.com/repos/ollama/ollama/issues/1635/events
|
https://github.com/ollama/ollama/issues/1635
| 2,050,936,308
|
I_kwDOJ0Z1Ps56Ps30
| 1,635
|
[Request] Reduce Gocyclo
|
{
"login": "H0llyW00dzZ",
"id": 17626300,
"node_id": "MDQ6VXNlcjE3NjI2MzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/17626300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/H0llyW00dzZ",
"html_url": "https://github.com/H0llyW00dzZ",
"followers_url": "https://api.github.com/users/H0llyW00dzZ/followers",
"following_url": "https://api.github.com/users/H0llyW00dzZ/following{/other_user}",
"gists_url": "https://api.github.com/users/H0llyW00dzZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/H0llyW00dzZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/H0llyW00dzZ/subscriptions",
"organizations_url": "https://api.github.com/users/H0llyW00dzZ/orgs",
"repos_url": "https://api.github.com/users/H0llyW00dzZ/repos",
"events_url": "https://api.github.com/users/H0llyW00dzZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/H0llyW00dzZ/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-12-20T16:53:17
| 2024-05-07T07:57:03
| 2024-05-06T23:33:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

Code with this much cyclomatic complexity is not good for AI tooling (or human readers) to work with.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1635/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4959
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4959/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4959/comments
|
https://api.github.com/repos/ollama/ollama/issues/4959/events
|
https://github.com/ollama/ollama/pull/4959
| 2,343,393,852
|
PR_kwDOJ0Z1Ps5x8yAc
| 4,959
|
Add new community integration (TypingMind)
|
{
"login": "trungdq88",
"id": 4214509,
"node_id": "MDQ6VXNlcjQyMTQ1MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4214509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trungdq88",
"html_url": "https://github.com/trungdq88",
"followers_url": "https://api.github.com/users/trungdq88/followers",
"following_url": "https://api.github.com/users/trungdq88/following{/other_user}",
"gists_url": "https://api.github.com/users/trungdq88/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trungdq88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trungdq88/subscriptions",
"organizations_url": "https://api.github.com/users/trungdq88/orgs",
"repos_url": "https://api.github.com/users/trungdq88/repos",
"events_url": "https://api.github.com/users/trungdq88/events{/privacy}",
"received_events_url": "https://api.github.com/users/trungdq88/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-06-10T09:27:45
| 2024-11-21T10:45:03
| 2024-11-21T10:45:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4959",
"html_url": "https://github.com/ollama/ollama/pull/4959",
"diff_url": "https://github.com/ollama/ollama/pull/4959.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4959.patch",
"merged_at": null
}
| null |
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4959/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1148
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1148/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1148/comments
|
https://api.github.com/repos/ollama/ollama/issues/1148/events
|
https://github.com/ollama/ollama/issues/1148
| 1,996,063,151
|
I_kwDOJ0Z1Ps52-YGv
| 1,148
|
running any model crashes my Ubuntu 22.04 LTS system with 2 nvidia GPUs RTX 3060
|
{
"login": "pexus",
"id": 1809523,
"node_id": "MDQ6VXNlcjE4MDk1MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1809523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pexus",
"html_url": "https://github.com/pexus",
"followers_url": "https://api.github.com/users/pexus/followers",
"following_url": "https://api.github.com/users/pexus/following{/other_user}",
"gists_url": "https://api.github.com/users/pexus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pexus/subscriptions",
"organizations_url": "https://api.github.com/users/pexus/orgs",
"repos_url": "https://api.github.com/users/pexus/repos",
"events_url": "https://api.github.com/users/pexus/events{/privacy}",
"received_events_url": "https://api.github.com/users/pexus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2023-11-16T05:01:12
| 2023-11-18T18:35:39
| 2023-11-18T18:35:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It used to work before. The latest version just crashes my system. I tried running xwinlm, mistral, and llama2.
I have an AMD FX 830, 2 Nvidia RTX 3060 GPUs with 12GB each, and 32GB of CPU memory. Running on Ubuntu 22.04 LTS.
I am using the latest CUDA toolkit, 12.3.
|
{
"login": "pexus",
"id": 1809523,
"node_id": "MDQ6VXNlcjE4MDk1MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1809523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pexus",
"html_url": "https://github.com/pexus",
"followers_url": "https://api.github.com/users/pexus/followers",
"following_url": "https://api.github.com/users/pexus/following{/other_user}",
"gists_url": "https://api.github.com/users/pexus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pexus/subscriptions",
"organizations_url": "https://api.github.com/users/pexus/orgs",
"repos_url": "https://api.github.com/users/pexus/repos",
"events_url": "https://api.github.com/users/pexus/events{/privacy}",
"received_events_url": "https://api.github.com/users/pexus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1148/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4075
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4075/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4075/comments
|
https://api.github.com/repos/ollama/ollama/issues/4075/events
|
https://github.com/ollama/ollama/issues/4075
| 2,273,307,461
|
I_kwDOJ0Z1Ps6Hf-tF
| 4,075
|
invalid file magic while importing llama3 70b into ollama
|
{
"login": "SakuraEntropia",
"id": 61424969,
"node_id": "MDQ6VXNlcjYxNDI0OTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/61424969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SakuraEntropia",
"html_url": "https://github.com/SakuraEntropia",
"followers_url": "https://api.github.com/users/SakuraEntropia/followers",
"following_url": "https://api.github.com/users/SakuraEntropia/following{/other_user}",
"gists_url": "https://api.github.com/users/SakuraEntropia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SakuraEntropia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SakuraEntropia/subscriptions",
"organizations_url": "https://api.github.com/users/SakuraEntropia/orgs",
"repos_url": "https://api.github.com/users/SakuraEntropia/repos",
"events_url": "https://api.github.com/users/SakuraEntropia/events{/privacy}",
"received_events_url": "https://api.github.com/users/SakuraEntropia/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-05-01T10:41:38
| 2024-06-25T23:36:28
| 2024-06-25T23:36:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The model I used is from https://hf-mirror.com/mradermacher/llama-3-70B-instruct-uncensored-i1-GGUF,
and the issue looks like this:
```
PS D:\Ollama> ollama create llama3:70b -f Modelfile
transferring model data
creating model layer
Error: invalid file magic
```
The model couldn't be successfully loaded into ollama.
Is llama3 supported for import?
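A quick way to check whether the downloaded file is actually a complete GGUF blob — a minimal sketch, assuming a valid GGUF file starts with the 4-byte magic `GGUF` (the path is illustrative; split multi-part downloads must be merged first):
```python
# Sanity-check the file header before importing (assumption: GGUF magic is b"GGUF").
with open("llama-3-70B-instruct-uncensored.i1-Q4_K_M.gguf", "rb") as f:
    magic = f.read(4)
if magic == b"GGUF":
    print("header looks like GGUF")
else:
    print(f"unexpected magic {magic!r} - file may be truncated or a split part")
```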
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4075/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3721
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3721/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3721/comments
|
https://api.github.com/repos/ollama/ollama/issues/3721/events
|
https://github.com/ollama/ollama/issues/3721
| 2,249,625,523
|
I_kwDOJ0Z1Ps6GFo-z
| 3,721
|
NEED WizardLM-2-8*22B Q6
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-18T02:46:06
| 2024-04-20T08:56:00
| 2024-04-20T08:56:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
NEED WizardLM-2-8*22B Q6
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3721/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2038
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2038/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2038/comments
|
https://api.github.com/repos/ollama/ollama/issues/2038/events
|
https://github.com/ollama/ollama/issues/2038
| 2,087,322,756
|
I_kwDOJ0Z1Ps58agSE
| 2,038
|
Minimal use of GPU in Docker (windows) with 10/33 layers loaded
|
{
"login": "sumitsodhi88",
"id": 149290101,
"node_id": "U_kgDOCOX8dQ",
"avatar_url": "https://avatars.githubusercontent.com/u/149290101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumitsodhi88",
"html_url": "https://github.com/sumitsodhi88",
"followers_url": "https://api.github.com/users/sumitsodhi88/followers",
"following_url": "https://api.github.com/users/sumitsodhi88/following{/other_user}",
"gists_url": "https://api.github.com/users/sumitsodhi88/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumitsodhi88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumitsodhi88/subscriptions",
"organizations_url": "https://api.github.com/users/sumitsodhi88/orgs",
"repos_url": "https://api.github.com/users/sumitsodhi88/repos",
"events_url": "https://api.github.com/users/sumitsodhi88/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumitsodhi88/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-01-18T01:53:21
| 2024-03-11T18:31:58
| 2024-03-11T18:31:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
My GPU is only being used at 23% while the CPU is at 100% when running the Docker image in a Windows environment.
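In case it helps with triage, a sketch of how one might force more layers onto the GPU via request options — assuming `num_gpu` controls the number of offloaded layers and there is enough VRAM (the model name and value are illustrative):
```python
# Ask Ollama to offload all 33 layers to the GPU (assumption: num_gpu sets
# the llama.cpp GPU layer count; dial it back down if VRAM runs out).
import json, urllib.request

payload = {
    "model": "llama2",
    "prompt": "Hello",
    "stream": False,
    "options": {"num_gpu": 33},
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req))["response"])
```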
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2038/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7910
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7910/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7910/comments
|
https://api.github.com/repos/ollama/ollama/issues/7910/events
|
https://github.com/ollama/ollama/issues/7910
| 2,711,933,320
|
I_kwDOJ0Z1Ps6hpNGI
| 7,910
|
tool parsing issues with "'"
|
{
"login": "fce2",
"id": 16529960,
"node_id": "MDQ6VXNlcjE2NTI5OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/16529960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fce2",
"html_url": "https://github.com/fce2",
"followers_url": "https://api.github.com/users/fce2/followers",
"following_url": "https://api.github.com/users/fce2/following{/other_user}",
"gists_url": "https://api.github.com/users/fce2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fce2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fce2/subscriptions",
"organizations_url": "https://api.github.com/users/fce2/orgs",
"repos_url": "https://api.github.com/users/fce2/repos",
"events_url": "https://api.github.com/users/fce2/events{/privacy}",
"received_events_url": "https://api.github.com/users/fce2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 4
| 2024-12-02T13:30:09
| 2024-12-09T21:22:00
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
It is difficult to see in the title: the single-quote character `'` is the problem.
When I ask my AI to "execute a python example" it generates something like `print('...')`, but the tool-call arguments truncate at the first `'`:
"model": "llama3.1:8b-instruct-fp16",
"created_at": "2024-12-02T13:26:55.1045197Z",
"message": {
"role": "assistant",
"content": "",
"tool_calls": [
{
"function": {
"name": "execute_python",
"arguments": {
"code": "print("
}
}
}
]
},
"done_reason": "stop",
"done": true,
"total_duration": 574508000,
"load_duration": 10134500,
"prompt_eval_count": 1371,
"prompt_eval_duration": 3000000,
"eval_count": 18,
"eval_duration": 559000000
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "fce2",
"id": 16529960,
"node_id": "MDQ6VXNlcjE2NTI5OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/16529960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fce2",
"html_url": "https://github.com/fce2",
"followers_url": "https://api.github.com/users/fce2/followers",
"following_url": "https://api.github.com/users/fce2/following{/other_user}",
"gists_url": "https://api.github.com/users/fce2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fce2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fce2/subscriptions",
"organizations_url": "https://api.github.com/users/fce2/orgs",
"repos_url": "https://api.github.com/users/fce2/repos",
"events_url": "https://api.github.com/users/fce2/events{/privacy}",
"received_events_url": "https://api.github.com/users/fce2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7910/timeline
| null |
reopened
| false
|
https://api.github.com/repos/ollama/ollama/issues/2552
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2552/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2552/comments
|
https://api.github.com/repos/ollama/ollama/issues/2552/events
|
https://github.com/ollama/ollama/pull/2552
| 2,139,634,632
|
PR_kwDOJ0Z1Ps5nJJHA
| 2,552
|
Fix duplicate menus on update and exit on signals
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-16T23:35:09
| 2024-02-17T01:23:40
| 2024-02-17T01:23:37
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2552",
"html_url": "https://github.com/ollama/ollama/pull/2552",
"diff_url": "https://github.com/ollama/ollama/pull/2552.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2552.patch",
"merged_at": "2024-02-17T01:23:37"
}
|
Also fixes a few fit-and-finish items for better developer experience
Fixes #2521
Fixes #2522
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2552/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/352
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/352/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/352/comments
|
https://api.github.com/repos/ollama/ollama/issues/352/events
|
https://github.com/ollama/ollama/issues/352
| 1,851,591,225
|
I_kwDOJ0Z1Ps5uXQo5
| 352
|
crash on allocated size greater than the recommended max working set size
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-08-15T14:57:48
| 2023-09-07T13:35:01
| 2023-09-07T13:35:00
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When trying to load a large context window ollama crashed due to llama.cpp throwing an exception:
```
size = 160.00 MB, (12018.69 / 10922.67), warning: current allocated size is greater than the recommended max working set size
ggml_metal_graph_compute: command buffer 0 failed with status 5
GGML_ASSERT: ggml-metal.m:1177: false
SIGABRT: abort
PC=0x1a0f58724 m=10 sigcode=0
signal arrived during cgo execution
```
To reproduce:
- download a 16K model
```
FROM llongma-2-7b.ggmlv3.q4_0.bin
PARAMETER num_ctx 16000
TEMPLATE """
{{ .Prompt }}
"""
```
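For context, some back-of-the-envelope arithmetic — a rough sketch, assuming llama-2-7B-style geometry (32 layers, 4096 hidden dim, fp16 K/V cache) — shows why `num_ctx 16000` overshoots the recommended working set:
```python
# Approximate fp16 KV-cache size at num_ctx=16000 (assumed 7B geometry).
num_ctx, n_layers, hidden, bytes_per_val = 16000, 32, 4096, 2
kv_bytes = num_ctx * n_layers * 2 * hidden * bytes_per_val  # x2 for K and V
print(f"{kv_bytes / 2**30:.1f} GiB")  # ~7.8 GiB on top of the ~3.8 GiB q4_0 weights
```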
Specs:
<img width="277" alt="image" src="https://github.com/jmorganca/ollama/assets/5853428/f547523e-af64-46c6-9af0-82e81041d328">
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/352/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/352/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5673
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5673/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5673/comments
|
https://api.github.com/repos/ollama/ollama/issues/5673/events
|
https://github.com/ollama/ollama/issues/5673
| 2,406,889,988
|
I_kwDOJ0Z1Ps6PdjoE
| 5,673
|
Ollama spins up USB HDD
|
{
"login": "bkev",
"id": 10973030,
"node_id": "MDQ6VXNlcjEwOTczMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/10973030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bkev",
"html_url": "https://github.com/bkev",
"followers_url": "https://api.github.com/users/bkev/followers",
"following_url": "https://api.github.com/users/bkev/following{/other_user}",
"gists_url": "https://api.github.com/users/bkev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bkev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bkev/subscriptions",
"organizations_url": "https://api.github.com/users/bkev/orgs",
"repos_url": "https://api.github.com/users/bkev/repos",
"events_url": "https://api.github.com/users/bkev/events{/privacy}",
"received_events_url": "https://api.github.com/users/bkev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-07-13T12:56:21
| 2024-09-30T22:55:49
| 2024-09-30T22:55:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Whenever I run an Ollama model, Ollama now spins up my external USB hard drive despite not needing to, as all the models are on the internal drive.
I can't say I've always noticed it doing this, although it has always spun up the hard drive when upgrading, as it seems to scan USB devices.
Is there any way to stop Ollama doing this, e.g. by disabling any kind of USB scan functionality?
This is on a Raspberry Pi with an internal NVMe drive (where the models are) and an external USB drive that Ollama doesn't need.
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.2.3
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5673/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6265
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6265/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6265/comments
|
https://api.github.com/repos/ollama/ollama/issues/6265/events
|
https://github.com/ollama/ollama/issues/6265
| 2,456,705,221
|
I_kwDOJ0Z1Ps6SbljF
| 6,265
|
Not a feature request, not a bug, problem with LLama3.1
|
{
"login": "airdogvan",
"id": 31630759,
"node_id": "MDQ6VXNlcjMxNjMwNzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/31630759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/airdogvan",
"html_url": "https://github.com/airdogvan",
"followers_url": "https://api.github.com/users/airdogvan/followers",
"following_url": "https://api.github.com/users/airdogvan/following{/other_user}",
"gists_url": "https://api.github.com/users/airdogvan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/airdogvan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/airdogvan/subscriptions",
"organizations_url": "https://api.github.com/users/airdogvan/orgs",
"repos_url": "https://api.github.com/users/airdogvan/repos",
"events_url": "https://api.github.com/users/airdogvan/events{/privacy}",
"received_events_url": "https://api.github.com/users/airdogvan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-08-08T22:01:09
| 2024-08-11T20:17:23
| 2024-08-09T22:33:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have several models all running fine on ollama, including Llama3. Llama3.1 gives very long answers, then repeats them, and finally prints random characters; if I didn't use the interface to stop it, it would seemingly go on forever.
I am running llama3.1 with the same parameters that seem to be fine with all other models.
Any suggestions are welcome, as it seems to be quite a bit more powerful than all the other models.
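While debugging, one way to bound runaway generations is via request options — a sketch, assuming `num_predict` caps the number of generated tokens and `repeat_penalty` discourages loops (the values are illustrative, not a fix):
```python
# Cap generation length and penalize repetition while investigating.
import json, urllib.request

payload = {
    "model": "llama3.1",
    "prompt": "Why is the sky blue?",
    "stream": False,
    "options": {"num_predict": 256, "repeat_penalty": 1.2},
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req))["response"])
```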
|
{
"login": "airdogvan",
"id": 31630759,
"node_id": "MDQ6VXNlcjMxNjMwNzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/31630759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/airdogvan",
"html_url": "https://github.com/airdogvan",
"followers_url": "https://api.github.com/users/airdogvan/followers",
"following_url": "https://api.github.com/users/airdogvan/following{/other_user}",
"gists_url": "https://api.github.com/users/airdogvan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/airdogvan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/airdogvan/subscriptions",
"organizations_url": "https://api.github.com/users/airdogvan/orgs",
"repos_url": "https://api.github.com/users/airdogvan/repos",
"events_url": "https://api.github.com/users/airdogvan/events{/privacy}",
"received_events_url": "https://api.github.com/users/airdogvan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6265/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1258
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1258/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1258/comments
|
https://api.github.com/repos/ollama/ollama/issues/1258/events
|
https://github.com/ollama/ollama/pull/1258
| 2,008,889,413
|
PR_kwDOJ0Z1Ps5gQ9B8
| 1,258
|
warn if running a ggml model file
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-11-23T23:29:53
| 2023-12-06T23:54:34
| 2023-11-24T19:02:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1258",
"html_url": "https://github.com/ollama/ollama/pull/1258",
"diff_url": "https://github.com/ollama/ollama/pull/1258.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1258.patch",
"merged_at": null
}
|
If the model a user is running will use the ggml runtime, log a warning that prompts them to check for an update to try and pull the gguf version of the model.
```
ollama run orca-mini
This model requires an update to work in future versions of Ollama. Check for update now? (y/n) y
pulling manifest
pulling 4de14feaabf8... 100% ▕██████▏(903 MB/903 MB)
pulling 8971eb8e89ce... 100% ▕██████▏(107 B/107 B)
pulling e7731c6d6962... 100% ▕██████▏(34 B/34 B)
pulling 905da7e7adc2... 100% ▕██████▏(76 B/76 B)
pulling 1bb164b05eb4... 100% ▕██████▏(460 B/460 B)
verifying sha256 digest
writing manifest
removing any unused layers
success
>>>
```
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1258/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2846
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2846/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2846/comments
|
https://api.github.com/repos/ollama/ollama/issues/2846/events
|
https://github.com/ollama/ollama/issues/2846
| 2,162,262,359
|
I_kwDOJ0Z1Ps6A4YFX
| 2,846
|
/read {filename} command to read a prompt from a file
|
{
"login": "nyimbi",
"id": 2156185,
"node_id": "MDQ6VXNlcjIxNTYxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2156185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nyimbi",
"html_url": "https://github.com/nyimbi",
"followers_url": "https://api.github.com/users/nyimbi/followers",
"following_url": "https://api.github.com/users/nyimbi/following{/other_user}",
"gists_url": "https://api.github.com/users/nyimbi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nyimbi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nyimbi/subscriptions",
"organizations_url": "https://api.github.com/users/nyimbi/orgs",
"repos_url": "https://api.github.com/users/nyimbi/repos",
"events_url": "https://api.github.com/users/nyimbi/events{/privacy}",
"received_events_url": "https://api.github.com/users/nyimbi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-02-29T23:24:38
| 2024-02-29T23:24:38
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be super useful to be able to read a prompt from a file and execute it.
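As a stopgap, a sketch of the same idea from outside the REPL — read the prompt from a file and send it through the REST API (the model name and path are illustrative):
```python
# Read a prompt from a file and execute it against a local model.
import json, urllib.request

prompt = open("prompt.txt").read()
payload = {"model": "llama2", "prompt": prompt, "stream": False}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req))["response"])
```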
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2846/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5404
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5404/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5404/comments
|
https://api.github.com/repos/ollama/ollama/issues/5404/events
|
https://github.com/ollama/ollama/issues/5404
| 2,383,291,047
|
I_kwDOJ0Z1Ps6ODiKn
| 5,404
|
ollama create model success but ps command returns empty
|
{
"login": "tammypi",
"id": 4264858,
"node_id": "MDQ6VXNlcjQyNjQ4NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4264858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tammypi",
"html_url": "https://github.com/tammypi",
"followers_url": "https://api.github.com/users/tammypi/followers",
"following_url": "https://api.github.com/users/tammypi/following{/other_user}",
"gists_url": "https://api.github.com/users/tammypi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tammypi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tammypi/subscriptions",
"organizations_url": "https://api.github.com/users/tammypi/orgs",
"repos_url": "https://api.github.com/users/tammypi/repos",
"events_url": "https://api.github.com/users/tammypi/events{/privacy}",
"received_events_url": "https://api.github.com/users/tammypi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-01T09:13:32
| 2024-07-02T11:19:00
| 2024-07-02T11:19:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I used the `ollama create emailphishing -f emailphishing.mf` command, and it printed "success":

When I used the `ollama ps` command, it returned an empty list:

### OS
Linux
### GPU
Other
### CPU
Intel
### Ollama version
0.1.48
|
{
"login": "tammypi",
"id": 4264858,
"node_id": "MDQ6VXNlcjQyNjQ4NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4264858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tammypi",
"html_url": "https://github.com/tammypi",
"followers_url": "https://api.github.com/users/tammypi/followers",
"following_url": "https://api.github.com/users/tammypi/following{/other_user}",
"gists_url": "https://api.github.com/users/tammypi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tammypi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tammypi/subscriptions",
"organizations_url": "https://api.github.com/users/tammypi/orgs",
"repos_url": "https://api.github.com/users/tammypi/repos",
"events_url": "https://api.github.com/users/tammypi/events{/privacy}",
"received_events_url": "https://api.github.com/users/tammypi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5404/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7176
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7176/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7176/comments
|
https://api.github.com/repos/ollama/ollama/issues/7176/events
|
https://github.com/ollama/ollama/issues/7176
| 2,582,356,628
|
I_kwDOJ0Z1Ps6Z66KU
| 7,176
|
Error: exception done_getting_tensors: wrong number of tensors; expected 255, got 254
|
{
"login": "GeorgeR",
"id": 897457,
"node_id": "MDQ6VXNlcjg5NzQ1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/897457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GeorgeR",
"html_url": "https://github.com/GeorgeR",
"followers_url": "https://api.github.com/users/GeorgeR/followers",
"following_url": "https://api.github.com/users/GeorgeR/following{/other_user}",
"gists_url": "https://api.github.com/users/GeorgeR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GeorgeR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeorgeR/subscriptions",
"organizations_url": "https://api.github.com/users/GeorgeR/orgs",
"repos_url": "https://api.github.com/users/GeorgeR/repos",
"events_url": "https://api.github.com/users/GeorgeR/events{/privacy}",
"received_events_url": "https://api.github.com/users/GeorgeR/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-10-12T00:02:11
| 2024-10-12T06:16:05
| 2024-10-12T06:15:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After installing Ollama, pulling llama3.2 and trying to run it, I get this error. I saw other threads regarding the same, but in my case updating to the latest ollama didn't help.
Here's what --version dumps out:
ollama version is 0.1.30
Warning: client version is 0.3.12
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.30
|
{
"login": "GeorgeR",
"id": 897457,
"node_id": "MDQ6VXNlcjg5NzQ1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/897457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GeorgeR",
"html_url": "https://github.com/GeorgeR",
"followers_url": "https://api.github.com/users/GeorgeR/followers",
"following_url": "https://api.github.com/users/GeorgeR/following{/other_user}",
"gists_url": "https://api.github.com/users/GeorgeR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GeorgeR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeorgeR/subscriptions",
"organizations_url": "https://api.github.com/users/GeorgeR/orgs",
"repos_url": "https://api.github.com/users/GeorgeR/repos",
"events_url": "https://api.github.com/users/GeorgeR/events{/privacy}",
"received_events_url": "https://api.github.com/users/GeorgeR/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7176/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3590
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3590/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3590/comments
|
https://api.github.com/repos/ollama/ollama/issues/3590/events
|
https://github.com/ollama/ollama/issues/3590
| 2,237,311,481
|
I_kwDOJ0Z1Ps6FWqn5
| 3,590
|
Concurrency scheduling is not supported.
|
{
"login": "hwfancyz7k",
"id": 148410629,
"node_id": "U_kgDOCNiRBQ",
"avatar_url": "https://avatars.githubusercontent.com/u/148410629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwfancyz7k",
"html_url": "https://github.com/hwfancyz7k",
"followers_url": "https://api.github.com/users/hwfancyz7k/followers",
"following_url": "https://api.github.com/users/hwfancyz7k/following{/other_user}",
"gists_url": "https://api.github.com/users/hwfancyz7k/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hwfancyz7k/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwfancyz7k/subscriptions",
"organizations_url": "https://api.github.com/users/hwfancyz7k/orgs",
"repos_url": "https://api.github.com/users/hwfancyz7k/repos",
"events_url": "https://api.github.com/users/hwfancyz7k/events{/privacy}",
"received_events_url": "https://api.github.com/users/hwfancyz7k/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-11T09:36:21
| 2024-04-12T18:47:53
| 2024-04-12T18:47:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have multiple Intel CPUs and NVIDIA GPUs, but the generate interface can only initiate one request at a time. Even though I have sufficient resources, it gets stuck without further scheduling. This is a hot bug, please fix it as soon as possible.
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
_No response_
### Architecture
_No response_
### Platform
_No response_
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3590/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4321
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4321/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4321/comments
|
https://api.github.com/repos/ollama/ollama/issues/4321/events
|
https://github.com/ollama/ollama/pull/4321
| 2,290,282,452
|
PR_kwDOJ0Z1Ps5vINVu
| 4,321
|
Use `--quantize` flag and `quantize` api parameter
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-10T19:21:05
| 2024-05-10T20:06:14
| 2024-05-10T20:06:13
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4321",
"html_url": "https://github.com/ollama/ollama/pull/4321",
"diff_url": "https://github.com/ollama/ollama/pull/4321.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4321.patch",
"merged_at": "2024-05-10T20:06:13"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4321/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2713
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2713/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2713/comments
|
https://api.github.com/repos/ollama/ollama/issues/2713/events
|
https://github.com/ollama/ollama/issues/2713
| 2,151,541,844
|
I_kwDOJ0Z1Ps6APexU
| 2,713
|
llava13b memory access faults on api/chat (firts call fine, fail on second one)
|
{
"login": "uneuro",
"id": 5337885,
"node_id": "MDQ6VXNlcjUzMzc4ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5337885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uneuro",
"html_url": "https://github.com/uneuro",
"followers_url": "https://api.github.com/users/uneuro/followers",
"following_url": "https://api.github.com/users/uneuro/following{/other_user}",
"gists_url": "https://api.github.com/users/uneuro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uneuro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uneuro/subscriptions",
"organizations_url": "https://api.github.com/users/uneuro/orgs",
"repos_url": "https://api.github.com/users/uneuro/repos",
"events_url": "https://api.github.com/users/uneuro/events{/privacy}",
"received_events_url": "https://api.github.com/users/uneuro/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-02-23T17:50:56
| 2024-12-19T21:37:07
| 2024-12-19T21:37:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

I have 2x7900xtx
if I close ollama after each requests and specify only 1 gpu it's running well.
I tried 8 times to run ollama server and close after a request, at some point it was broken too cause closing wasn't clearing the vram
<img width="1878" alt="image" src="https://github.com/ollama/ollama/assets/5337885/f64e242b-14d5-4bb9-a741-f425db2cc4e4">
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2713/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7706
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7706/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7706/comments
|
https://api.github.com/repos/ollama/ollama/issues/7706/events
|
https://github.com/ollama/ollama/pull/7706
| 2,665,911,664
|
PR_kwDOJ0Z1Ps6CKKAi
| 7,706
|
feat: add VT chat app to README
|
{
"login": "vinhnx",
"id": 1097578,
"node_id": "MDQ6VXNlcjEwOTc1Nzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1097578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vinhnx",
"html_url": "https://github.com/vinhnx",
"followers_url": "https://api.github.com/users/vinhnx/followers",
"following_url": "https://api.github.com/users/vinhnx/following{/other_user}",
"gists_url": "https://api.github.com/users/vinhnx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vinhnx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinhnx/subscriptions",
"organizations_url": "https://api.github.com/users/vinhnx/orgs",
"repos_url": "https://api.github.com/users/vinhnx/repos",
"events_url": "https://api.github.com/users/vinhnx/events{/privacy}",
"received_events_url": "https://api.github.com/users/vinhnx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-11-17T14:07:38
| 2024-11-18T03:54:29
| 2024-11-17T22:35:41
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7706",
"html_url": "https://github.com/ollama/ollama/pull/7706",
"diff_url": "https://github.com/ollama/ollama/pull/7706.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7706.patch",
"merged_at": "2024-11-17T22:35:41"
}
|
Add VT app, a minimal multimodal AI chat app with dynamic conversation routing, support both models backend by Ollama.
Thank you!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7706/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3627
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3627/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3627/comments
|
https://api.github.com/repos/ollama/ollama/issues/3627/events
|
https://github.com/ollama/ollama/pull/3627
| 2,241,641,473
|
PR_kwDOJ0Z1Ps5skjK8
| 3,627
|
Update llama.cpp submodule to `4bd0f93`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-13T16:16:29
| 2024-04-15T11:55:02
| 2024-04-13T17:43:02
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3627",
"html_url": "https://github.com/ollama/ollama/pull/3627",
"diff_url": "https://github.com/ollama/ollama/pull/3627.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3627.patch",
"merged_at": "2024-04-13T17:43:02"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3627/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2528
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2528/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2528/comments
|
https://api.github.com/repos/ollama/ollama/issues/2528/events
|
https://github.com/ollama/ollama/pull/2528
| 2,137,579,266
|
PR_kwDOJ0Z1Ps5nCGJN
| 2,528
|
Explicitly disable AVX2 on GPU builds
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-15T22:52:34
| 2024-02-19T21:13:08
| 2024-02-16T00:06:34
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2528",
"html_url": "https://github.com/ollama/ollama/pull/2528",
"diff_url": "https://github.com/ollama/ollama/pull/2528.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2528.patch",
"merged_at": null
}
|
Even though we weren't setting it to on, somewhere in the cmake config it was getting toggled on. By explicitly setting it to off, we get `/arch:AVX` as intended.
Fixes #2527
Input:
```
generating config with: cmake -S ../llama.cpp -B ../llama.cpp/build/windows/amd64/cuda_v11.3 -DBUILD_SHARED_LIBS=on -DLLAMA_NATIVE=off -A x64 -DCMAKE_VERBOSE_MAKEFILE=on -DLLAMA_SERVER_VERBOSE=on -DLLAMA_CUBLAS=ON -DLLAMA_AVX=on -DLLAMA_AVX2=off -DCUDAToolkit_INCLUDE_DIR=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include -DCMAKE_CUDA_ARCHITECTURES=50;52;61;70;75;80
```
Example Compile: (note the correct `/arch`
```
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64\CL.exe /c /I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" /Zi /W3 /WX- /diagnostics:column /O2 /Ob1 /D WIN32 /D _WINDOWS /D NDEBUG /D GGML_USE_CUBLAS /D GGML_CUDA_DMMV_X=32 /D GGML_CUDA_MMV_Y=1 /D K_QUANTS_PER_ITERATION=2 /D GGML_CUDA_PEER_MAX_BATCH_SIZE=128 /D _CRT_SECURE_NO_WARNINGS /D _XOPEN_SOURCE=600 /D "CMAKE_INTDIR=\"RelWithDebInfo\"" /D _MBCS /Gm- /EHsc /MD /GS /arch:AVX /fp:precise /Zc:wchar_t /Zc:forScope /Zc:inline /GR /Fo"build_info.dir\RelWithDebInfo\\" /Fd"build_info.dir\RelWithDebInfo\build_info.pdb" /external:W3 /Gd /TP /errorReport:queue "C:\Users\danie\code\ollama\llm\llama.cpp\common\build-info.cpp"
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2528/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3446
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3446/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3446/comments
|
https://api.github.com/repos/ollama/ollama/issues/3446/events
|
https://github.com/ollama/ollama/issues/3446
| 2,219,564,146
|
I_kwDOJ0Z1Ps6ES9xy
| 3,446
|
ollama not using AMD GPU on linux
|
{
"login": "jab416171",
"id": 345752,
"node_id": "MDQ6VXNlcjM0NTc1Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/345752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jab416171",
"html_url": "https://github.com/jab416171",
"followers_url": "https://api.github.com/users/jab416171/followers",
"following_url": "https://api.github.com/users/jab416171/following{/other_user}",
"gists_url": "https://api.github.com/users/jab416171/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jab416171/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jab416171/subscriptions",
"organizations_url": "https://api.github.com/users/jab416171/orgs",
"repos_url": "https://api.github.com/users/jab416171/repos",
"events_url": "https://api.github.com/users/jab416171/events{/privacy}",
"received_events_url": "https://api.github.com/users/jab416171/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-04-02T04:57:17
| 2024-05-05T18:17:07
| 2024-05-05T18:17:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
ollama is only using my CPU. I've tried running it with `ROCR_VISIBLE_DEVICES=0 ollama serve` but that doesn't seem to change anything.
```
time=2024-04-01T22:37:03.207-06:00 level=INFO source=routes.go:1118 msg="Listening on 127.0.0.1:11434 (version 0.1.30)"
time=2024-04-01T22:37:03.207-06:00 level=INFO source=payload_common.go:113 msg="Extracting dynamic libraries to /tmp/ollama2592388870/runners ..."
time=2024-04-01T22:37:03.358-06:00 level=INFO source=payload_common.go:140 msg="Dynamic LLM libraries [cpu_avx2 cpu cpu_avx]"
time=2024-04-01T22:37:03.358-06:00 level=INFO source=gpu.go:115 msg="Detecting GPU type"
time=2024-04-01T22:37:03.358-06:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libcudart.so*"
time=2024-04-01T22:37:03.385-06:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-04-01T22:37:03.385-06:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-04-01T22:37:03.397-06:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-04-01T22:37:03.397-06:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-01T22:37:03.397-06:00 level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-04-01T22:37:03.397-06:00 level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx1030]"
time=2024-04-01T22:37:03.404-06:00 level=INFO source=amd_linux.go:119 msg="amdgpu [0] gfx1030 is supported"
time=2024-04-01T22:37:03.404-06:00 level=INFO source=amd_linux.go:246 msg="[0] amdgpu totalMemory 16368M"
time=2024-04-01T22:37:03.404-06:00 level=INFO source=amd_linux.go:247 msg="[0] amdgpu freeMemory 16368M"
[GIN] 2024/04/01 - 22:42:28 | 200 | 30.06µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/04/01 - 22:42:28 | 200 | 16.663531ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/04/01 - 22:42:28 | 200 | 402.998µs | 127.0.0.1 | POST "/api/show"
time=2024-04-01T22:42:28.514-06:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-01T22:42:28.514-06:00 level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-04-01T22:42:28.514-06:00 level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx1030]"
time=2024-04-01T22:42:28.517-06:00 level=INFO source=amd_linux.go:119 msg="amdgpu [0] gfx1030 is supported"
time=2024-04-01T22:42:28.517-06:00 level=INFO source=amd_linux.go:246 msg="[0] amdgpu totalMemory 16368M"
time=2024-04-01T22:42:28.517-06:00 level=INFO source=amd_linux.go:247 msg="[0] amdgpu freeMemory 16368M"
time=2024-04-01T22:42:28.517-06:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-01T22:42:28.517-06:00 level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-04-01T22:42:28.517-06:00 level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx1030]"
time=2024-04-01T22:42:28.520-06:00 level=INFO source=amd_linux.go:119 msg="amdgpu [0] gfx1030 is supported"
time=2024-04-01T22:42:28.520-06:00 level=INFO source=amd_linux.go:246 msg="[0] amdgpu totalMemory 16368M"
time=2024-04-01T22:42:28.520-06:00 level=INFO source=amd_linux.go:247 msg="[0] amdgpu freeMemory 16368M"
time=2024-04-01T22:42:28.521-06:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama2592388870/runners/cpu_avx2/libext_server.so
time=2024-04-01T22:42:28.523-06:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama2592388870/runners/cpu_avx2/libext_server.so"
time=2024-04-01T22:42:28.523-06:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server"
```
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
arm64
### Platform
_No response_
### Ollama version
0.1.30
### GPU
AMD
### GPU info
```
Name: gfx1030
Marketing Name: AMD Radeon RX 6900 XT
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 1
Device Type: GPU
```
### CPU
AMD
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3446/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1659
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1659/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1659/comments
|
https://api.github.com/repos/ollama/ollama/issues/1659/events
|
https://github.com/ollama/ollama/issues/1659
| 2,052,724,559
|
I_kwDOJ0Z1Ps56WhdP
| 1,659
|
Ollama push fails on slower downloads with a 403
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2023-12-21T16:25:44
| 2024-03-11T22:40:01
| 2024-03-11T22:40:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have a model I want to push but at only a 35Mbps upload speed (thanks Xfinity Gigabit), it takes about 25 minutes to upload. The problem is that when it takes longer than 20 minutes, it fails with this error:
```
ollama push mattw/gpt4-x-alpaca:latest
retrieving manifest
pushing 6bccfcf77d21... 31% ▕█████████████████████████ ▏ 2.3 GB/7.4 GB
Error: max retries exceeded: http status 403 Forbidden: <?xml version="1.0" encoding="UTF-8"?><Error><Code>ExpiredRequest</Code><Message>Request has expired</Message></Error>
```
What is especially interesting is that just a minute or two before that output I saw this:
```
❯ ollama push mattw/gpt4-x-alpaca:latest
retrieving manifest
pushing 6bccfcf77d21... 81% ▕██████████████████████████████████████████████████████████████████ ▏ 6.0 GB/7.4 GB
```
I just happened to take a screenshot. Notice that the progress is further along, but a minute later it went back down to 31% from 81%.
And then on restarting the push, I have to start over.
here is a video of it happening. Skip to about 30 seconds in for the good part: https://cln.sh/Kgggx7lf
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1659/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8055
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8055/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8055/comments
|
https://api.github.com/repos/ollama/ollama/issues/8055/events
|
https://github.com/ollama/ollama/pull/8055
| 2,734,291,036
|
PR_kwDOJ0Z1Ps6E7lK-
| 8,055
|
llama: enable JSON schema key ordering for generating grammars
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-12-11T23:57:04
| 2024-12-12T01:17:38
| 2024-12-12T01:17:36
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8055",
"html_url": "https://github.com/ollama/ollama/pull/8055",
"diff_url": "https://github.com/ollama/ollama/pull/8055.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8055.patch",
"merged_at": "2024-12-12T01:17:36"
}
|
Will do a follow up PR for updates to the command line with format
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8055/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7806
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7806/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7806/comments
|
https://api.github.com/repos/ollama/ollama/issues/7806/events
|
https://github.com/ollama/ollama/issues/7806
| 2,685,230,463
|
I_kwDOJ0Z1Ps6gDV1_
| 7,806
|
Context length not being updated
|
{
"login": "landoncrabtree",
"id": 34496757,
"node_id": "MDQ6VXNlcjM0NDk2NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/34496757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/landoncrabtree",
"html_url": "https://github.com/landoncrabtree",
"followers_url": "https://api.github.com/users/landoncrabtree/followers",
"following_url": "https://api.github.com/users/landoncrabtree/following{/other_user}",
"gists_url": "https://api.github.com/users/landoncrabtree/gists{/gist_id}",
"starred_url": "https://api.github.com/users/landoncrabtree/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/landoncrabtree/subscriptions",
"organizations_url": "https://api.github.com/users/landoncrabtree/orgs",
"repos_url": "https://api.github.com/users/landoncrabtree/repos",
"events_url": "https://api.github.com/users/landoncrabtree/events{/privacy}",
"received_events_url": "https://api.github.com/users/landoncrabtree/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-11-23T03:35:49
| 2024-11-23T17:19:15
| 2024-11-23T17:19:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```bash
ollama show llama3.2
Model
architecture llama
parameters 3.2B
context length 131072
embedding length 3072
quantization Q4_K_M
Parameters
stop "<|start_header_id|>"
stop "<|end_header_id|>"
stop "<|eot_id|>"
License
LLAMA 3.2 COMMUNITY LICENSE AGREEMENT
Llama 3.2 Version Release Date: September 25, 2024
```
So from here, we can see `context length 131072`. However,
```
ollama run llama3.2 "Create study flashcards from this lecture transcription:\n\n $(cat samples/out.wav.txt )"
time=2024-11-22T21:35:01.163-06:00 level=WARN source=runner.go:122 msg="input exceeds context length" prompt=25109 limit=2048
```
Looks like the context length is being limited to 2048?
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
ollama version is 0.4.2
|
{
"login": "landoncrabtree",
"id": 34496757,
"node_id": "MDQ6VXNlcjM0NDk2NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/34496757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/landoncrabtree",
"html_url": "https://github.com/landoncrabtree",
"followers_url": "https://api.github.com/users/landoncrabtree/followers",
"following_url": "https://api.github.com/users/landoncrabtree/following{/other_user}",
"gists_url": "https://api.github.com/users/landoncrabtree/gists{/gist_id}",
"starred_url": "https://api.github.com/users/landoncrabtree/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/landoncrabtree/subscriptions",
"organizations_url": "https://api.github.com/users/landoncrabtree/orgs",
"repos_url": "https://api.github.com/users/landoncrabtree/repos",
"events_url": "https://api.github.com/users/landoncrabtree/events{/privacy}",
"received_events_url": "https://api.github.com/users/landoncrabtree/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7806/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2145
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2145/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2145/comments
|
https://api.github.com/repos/ollama/ollama/issues/2145/events
|
https://github.com/ollama/ollama/issues/2145
| 2,094,713,641
|
I_kwDOJ0Z1Ps582ssp
| 2,145
|
Streaming response with `text/event-stream`
|
{
"login": "radames",
"id": 102277,
"node_id": "MDQ6VXNlcjEwMjI3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/102277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/radames",
"html_url": "https://github.com/radames",
"followers_url": "https://api.github.com/users/radames/followers",
"following_url": "https://api.github.com/users/radames/following{/other_user}",
"gists_url": "https://api.github.com/users/radames/gists{/gist_id}",
"starred_url": "https://api.github.com/users/radames/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/radames/subscriptions",
"organizations_url": "https://api.github.com/users/radames/orgs",
"repos_url": "https://api.github.com/users/radames/repos",
"events_url": "https://api.github.com/users/radames/events{/privacy}",
"received_events_url": "https://api.github.com/users/radames/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-22T20:41:37
| 2024-03-11T19:22:55
| 2024-03-11T19:20:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Are you still considering adding `text/event-stream` for the Streaming Response ? Reading #294, it might make sense to have that option for browser-only clients.
For reference, here is a JavaScript client for text streaming that works on both the browser and Node.js.
https://github.com/huggingface/huggingface.js/blob/main/packages/inference/src/tasks/custom/streamingRequest.ts
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2145/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2145/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/2845
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2845/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2845/comments
|
https://api.github.com/repos/ollama/ollama/issues/2845/events
|
https://github.com/ollama/ollama/issues/2845
| 2,162,137,372
|
I_kwDOJ0Z1Ps6A35kc
| 2,845
|
Multiple requests at once
|
{
"login": "trymeouteh",
"id": 31172274,
"node_id": "MDQ6VXNlcjMxMTcyMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trymeouteh",
"html_url": "https://github.com/trymeouteh",
"followers_url": "https://api.github.com/users/trymeouteh/followers",
"following_url": "https://api.github.com/users/trymeouteh/following{/other_user}",
"gists_url": "https://api.github.com/users/trymeouteh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trymeouteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trymeouteh/subscriptions",
"organizations_url": "https://api.github.com/users/trymeouteh/orgs",
"repos_url": "https://api.github.com/users/trymeouteh/repos",
"events_url": "https://api.github.com/users/trymeouteh/events{/privacy}",
"received_events_url": "https://api.github.com/users/trymeouteh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-29T21:31:55
| 2024-03-01T01:01:04
| 2024-03-01T01:01:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Would take more computing power on the users machine, but to allow Ollama to be able to make multiple requests at once.
Lets say you have two terminal windows running and you ask the AI in Window 1 to do X and ask the AI in Windows 2 which will either be using the same model or a different model and ask it to do Y and it will do X and Y at the same time, and not wait to do Y once X is done.
Or even if you have one application that integrated Ollama and another application that integrated Ollama, to be able to do X in application 1 and Y in application 2 at the same time.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2845/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5212
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5212/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5212/comments
|
https://api.github.com/repos/ollama/ollama/issues/5212/events
|
https://github.com/ollama/ollama/pull/5212
| 2,367,611,285
|
PR_kwDOJ0Z1Ps5zPXlQ
| 5,212
|
build: add source label to Dockerfile
|
{
"login": "umglurf",
"id": 15076744,
"node_id": "MDQ6VXNlcjE1MDc2NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/15076744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/umglurf",
"html_url": "https://github.com/umglurf",
"followers_url": "https://api.github.com/users/umglurf/followers",
"following_url": "https://api.github.com/users/umglurf/following{/other_user}",
"gists_url": "https://api.github.com/users/umglurf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/umglurf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/umglurf/subscriptions",
"organizations_url": "https://api.github.com/users/umglurf/orgs",
"repos_url": "https://api.github.com/users/umglurf/repos",
"events_url": "https://api.github.com/users/umglurf/events{/privacy}",
"received_events_url": "https://api.github.com/users/umglurf/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-06-22T06:25:02
| 2024-11-22T09:49:59
| 2024-11-21T11:16:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5212",
"html_url": "https://github.com/ollama/ollama/pull/5212",
"diff_url": "https://github.com/ollama/ollama/pull/5212.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5212.patch",
"merged_at": null
}
|
This allows tools such as dependabot and renovate
to find the source and changelog
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5212/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4790
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4790/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4790/comments
|
https://api.github.com/repos/ollama/ollama/issues/4790/events
|
https://github.com/ollama/ollama/issues/4790
| 2,329,844,537
|
I_kwDOJ0Z1Ps6K3ps5
| 4,790
|
command-r:35b uses too much memory
|
{
"login": "Zig1375",
"id": 2699034,
"node_id": "MDQ6VXNlcjI2OTkwMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2699034?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zig1375",
"html_url": "https://github.com/Zig1375",
"followers_url": "https://api.github.com/users/Zig1375/followers",
"following_url": "https://api.github.com/users/Zig1375/following{/other_user}",
"gists_url": "https://api.github.com/users/Zig1375/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zig1375/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zig1375/subscriptions",
"organizations_url": "https://api.github.com/users/Zig1375/orgs",
"repos_url": "https://api.github.com/users/Zig1375/repos",
"events_url": "https://api.github.com/users/Zig1375/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zig1375/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-06-02T20:15:38
| 2024-06-25T17:11:14
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
My PC configuration is:
- GPU - Nvidia RTX 4070 (12Gb)
- 64 GB RAM
- When I do not use Ollama: 11.9 GB RAM is used
- When I use Ollama with the default settings: 33.7 GB RAM is used
- `num_ctx` = 4k (4,096): **35.1** GB RAM is used
- `num_ctx` = 8k (8,192): **39.9** GB RAM is used
- `num_ctx` = 12k (12,288): **44.2** GB RAM is used
- `num_ctx` = 32k (32,768): **63.6** GB RAM is used (ALL memory is used)
The actual context sent to Ollama is only about 6k!
Even though this model supports a context of up to 128k, I'm unable to use even a 32k one. I'm not sure if this is a real bug, but it doesn't seem right to me that a 32k context would use 12 GB of GPU RAM and 64 GB of my PC's RAM.
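For reference, a minimal sketch of how `num_ctx` can be varied from the ollama Python client to reproduce the growth (the prompt and loop values are placeholders for illustration, not from the report):
```python
import ollama  # pip install ollama

# Reload the model at increasing context sizes and observe RAM usage
# after each call; the prompt is arbitrary.
for num_ctx in (4096, 8192, 12288, 32768):
    ollama.generate(
        model="command-r:35b",
        prompt="hello",
        options={"num_ctx": num_ctx},
    )
    print(f"loaded with num_ctx={num_ctx}; check RAM usage now")
```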
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.41
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4790/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4790/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8353
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8353/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8353/comments
|
https://api.github.com/repos/ollama/ollama/issues/8353/events
|
https://github.com/ollama/ollama/issues/8353
| 2,776,531,304
|
I_kwDOJ0Z1Ps6lfoFo
| 8,353
|
FROM path resolution uses working directory instead of Modelfile location
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2025-01-08T23:28:17
| 2025-01-11T00:14:09
| 2025-01-11T00:14:09
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
**Description:**
When using a relative path in a FROM statement within a Modelfile, the path is resolved relative to the current working directory where the ollama command is executed, rather than relative to the Modelfile's location. This makes it difficult to create portable Modelfiles that reference local files, as they break when run from different directories.
Example to reproduce:
```
# Directory structure:
my-project/
├── models/
│ └── base.txt
└── custom/
└── Modelfile # Contains: FROM ../models/base.txt
# Running from project root works:
cd my-project
ollama create mymodel -f custom/Modelfile # ✓
# Running from custom/ directory fails:
cd my-project/custom
ollama create mymodel -f Modelfile # ✗ Error: pull model manifest: file does not exist
```
**Expected behavior:**
- Relative paths in `FROM` statements should be resolved relative to the Modelfile's location.
- This would allow Modelfiles to reliably reference files in their parent/sibling directories regardless of where the ollama command is run from (see the sketch below).

**Current behavior:**
- Paths are resolved relative to the current working directory where ollama is executed.
- This makes Modelfiles less portable, as they break when run from different directories.
- It requires users to always run ollama from a specific directory or use absolute paths.
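A minimal sketch of the expected resolution logic (written in Python for illustration; the helper name `resolve_from_path` is hypothetical, and Ollama itself is written in Go):
```python
import os

def resolve_from_path(modelfile_path: str, from_value: str) -> str:
    # Resolve a relative FROM value against the Modelfile's own
    # directory rather than the process's working directory.
    if os.path.isabs(from_value):
        return from_value
    base = os.path.dirname(os.path.abspath(modelfile_path))
    return os.path.normpath(os.path.join(base, from_value))

# Run from my-project/:
#   resolve_from_path("custom/Modelfile", "../models/base.txt")
#   -> "<abs>/my-project/models/base.txt"
```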
### Ollama version
Development on main branch
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8353/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3154
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3154/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3154/comments
|
https://api.github.com/repos/ollama/ollama/issues/3154/events
|
https://github.com/ollama/ollama/issues/3154
| 2,187,316,395
|
I_kwDOJ0Z1Ps6CX8yr
| 3,154
|
Why is Ollama so terribly slow when I set format="json"?
|
{
"login": "eliranwong",
"id": 25262722,
"node_id": "MDQ6VXNlcjI1MjYyNzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25262722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliranwong",
"html_url": "https://github.com/eliranwong",
"followers_url": "https://api.github.com/users/eliranwong/followers",
"following_url": "https://api.github.com/users/eliranwong/following{/other_user}",
"gists_url": "https://api.github.com/users/eliranwong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliranwong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliranwong/subscriptions",
"organizations_url": "https://api.github.com/users/eliranwong/orgs",
"repos_url": "https://api.github.com/users/eliranwong/repos",
"events_url": "https://api.github.com/users/eliranwong/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliranwong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 14
| 2024-03-14T21:43:14
| 2024-09-05T23:47:21
| 2024-03-16T15:08:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When I use format="json", the speed is extremely slow. However, I just tried llamafile with JSON output and the same prompt: what takes Ollama two minutes to answer takes llamafile a few seconds with the same model. Please advise; if this issue cannot be sorted out, Ollama is obviously not a suitable choice for developing applications that need JSON output. I really like Ollama, as it is easy to set up.
```
import ollama
from ollama import Options  # Options is exported by the ollama Python client

# `messages` is defined elsewhere in the calling code.
completion = ollama.chat(
    model="mistral",
    messages=messages,
    format="json",
    options=Options(
        temperature=0.0,
        num_ctx=100000,
        num_predict=-1,
    ),
)
```
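A minimal timing harness to isolate the `format="json"` overhead (a sketch; the prompt is arbitrary):
```python
import time
import ollama

def timed(fmt=None):
    # Time one chat call, with and without JSON-constrained output.
    messages = [{"role": "user", "content": 'Reply with the JSON object {"ok": true}.'}]
    start = time.time()
    ollama.chat(model="mistral", messages=messages, format=fmt)
    return time.time() - start

print(f"no format  : {timed():6.1f}s")
print(f"format=json: {timed('json'):6.1f}s")
```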
|
{
"login": "eliranwong",
"id": 25262722,
"node_id": "MDQ6VXNlcjI1MjYyNzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25262722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliranwong",
"html_url": "https://github.com/eliranwong",
"followers_url": "https://api.github.com/users/eliranwong/followers",
"following_url": "https://api.github.com/users/eliranwong/following{/other_user}",
"gists_url": "https://api.github.com/users/eliranwong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliranwong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliranwong/subscriptions",
"organizations_url": "https://api.github.com/users/eliranwong/orgs",
"repos_url": "https://api.github.com/users/eliranwong/repos",
"events_url": "https://api.github.com/users/eliranwong/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliranwong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3154/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8630
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8630/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8630/comments
|
https://api.github.com/repos/ollama/ollama/issues/8630/events
|
https://github.com/ollama/ollama/issues/8630
| 2,815,542,115
|
I_kwDOJ0Z1Ps6n0cNj
| 8,630
|
loss of chat history on restart
|
{
"login": "oguzhanet",
"id": 77545698,
"node_id": "MDQ6VXNlcjc3NTQ1Njk4",
"avatar_url": "https://avatars.githubusercontent.com/u/77545698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oguzhanet",
"html_url": "https://github.com/oguzhanet",
"followers_url": "https://api.github.com/users/oguzhanet/followers",
"following_url": "https://api.github.com/users/oguzhanet/following{/other_user}",
"gists_url": "https://api.github.com/users/oguzhanet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oguzhanet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oguzhanet/subscriptions",
"organizations_url": "https://api.github.com/users/oguzhanet/orgs",
"repos_url": "https://api.github.com/users/oguzhanet/repos",
"events_url": "https://api.github.com/users/oguzhanet/events{/privacy}",
"received_events_url": "https://api.github.com/users/oguzhanet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 5
| 2025-01-28T12:37:29
| 2025-01-28T14:04:26
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello, I am using llama3.1:8b. When I stop and reopen the application, the old chat disappears. How can I prevent this?
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8630/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3879
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3879/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3879/comments
|
https://api.github.com/repos/ollama/ollama/issues/3879/events
|
https://github.com/ollama/ollama/pull/3879
| 2,261,507,878
|
PR_kwDOJ0Z1Ps5tnUnV
| 3,879
|
Use ReadFull over CopyN when decoding GGUFs
|
{
"login": "brycereitano",
"id": 1928691,
"node_id": "MDQ6VXNlcjE5Mjg2OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1928691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brycereitano",
"html_url": "https://github.com/brycereitano",
"followers_url": "https://api.github.com/users/brycereitano/followers",
"following_url": "https://api.github.com/users/brycereitano/following{/other_user}",
"gists_url": "https://api.github.com/users/brycereitano/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brycereitano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brycereitano/subscriptions",
"organizations_url": "https://api.github.com/users/brycereitano/orgs",
"repos_url": "https://api.github.com/users/brycereitano/repos",
"events_url": "https://api.github.com/users/brycereitano/events{/privacy}",
"received_events_url": "https://api.github.com/users/brycereitano/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2024-04-24T14:57:36
| 2024-04-26T00:17:10
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3879",
"html_url": "https://github.com/ollama/ollama/pull/3879",
"diff_url": "https://github.com/ollama/ollama/pull/3879.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3879.patch",
"merged_at": null
}
|
Opting to use `io.ReadFull` with preallocated `[]byte` slices, since `bytes.Buffer` requires multiple grows/allocs to read in long strings.
In addition, the slices are preallocated to avoid the underlying allocations caused by appending to them.
I observed real-world performance improvements, and a small microbenchmark reading phi3 4Q from disk (with the file cached by the OS) shows the following improvements:
```
goos: linux
goarch: amd64
pkg: github.com/ollama/ollama/llm
cpu: 12th Gen Intel(R) Core(TM) i7-1260P
          │   old.out   │              new.out                │
          │   sec/op    │   sec/op     vs base                │
Decode-16   63.57m ± 2%   35.28m ± 2%  -44.50% (p=0.000 n=10)

          │    old.out    │              new.out                 │
          │     B/op      │    B/op       vs base                │
Decode-16   58.691Mi ± 0%   3.195Mi ± 0%  -94.56% (p=0.000 n=10)

          │   old.out   │              new.out                │
          │  allocs/op  │  allocs/op    vs base               │
Decode-16   324.4k ± 0%   227.1k ± 0%  -29.97% (p=0.000 n=10)
```
Results may vary depending on disk performance and whether the file was cached by the OS.
I may dig around for more performance improvements in the future:
- Using a shared `bytes.Buffer` for loading in strings to cut down on allocs.
- Using `bufio.Reader` to buffer reads from disk.
- Preventing repeated allocs when reading small datatypes by using a read method that accepts an existing buffer.
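For illustration only, a rough Python analogue of the technique (read exactly n bytes into one preallocated buffer instead of growing a dynamic one); this is a sketch of the idea, not the Go code in this PR:
```python
import io

def read_exact(r: io.RawIOBase, n: int) -> bytes:
    # Analogue of io.ReadFull: fill a preallocated buffer in place,
    # avoiding the repeated grow/alloc cycle of a dynamic buffer.
    buf = bytearray(n)
    view = memoryview(buf)
    pos = 0
    while pos < n:
        got = r.readinto(view[pos:])
        if not got:
            raise EOFError(f"wanted {n} bytes, got {pos}")
        pos += got
    return bytes(buf)
```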
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3879/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7954
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7954/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7954/comments
|
https://api.github.com/repos/ollama/ollama/issues/7954/events
|
https://github.com/ollama/ollama/pull/7954
| 2,721,208,460
|
PR_kwDOJ0Z1Ps6EOfng
| 7,954
|
wip: next ollama runner build updates
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-12-05T19:15:47
| 2025-01-16T17:34:46
| 2025-01-16T17:34:46
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7954",
"html_url": "https://github.com/ollama/ollama/pull/7954",
"diff_url": "https://github.com/ollama/ollama/pull/7954.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7954.patch",
"merged_at": null
}
|
Carries #7499 and adjusts the layout for the new runner
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7954/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5078
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5078/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5078/comments
|
https://api.github.com/repos/ollama/ollama/issues/5078/events
|
https://github.com/ollama/ollama/pull/5078
| 2,355,788,027
|
PR_kwDOJ0Z1Ps5ynHod
| 5,078
|
Add Chinese translation of README
|
{
"login": "sumingcheng",
"id": 21992204,
"node_id": "MDQ6VXNlcjIxOTkyMjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21992204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumingcheng",
"html_url": "https://github.com/sumingcheng",
"followers_url": "https://api.github.com/users/sumingcheng/followers",
"following_url": "https://api.github.com/users/sumingcheng/following{/other_user}",
"gists_url": "https://api.github.com/users/sumingcheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumingcheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumingcheng/subscriptions",
"organizations_url": "https://api.github.com/users/sumingcheng/orgs",
"repos_url": "https://api.github.com/users/sumingcheng/repos",
"events_url": "https://api.github.com/users/sumingcheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumingcheng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-16T13:59:57
| 2024-06-16T14:00:48
| 2024-06-16T14:00:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5078",
"html_url": "https://github.com/ollama/ollama/pull/5078",
"diff_url": "https://github.com/ollama/ollama/pull/5078.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5078.patch",
"merged_at": null
}
|
This pull request adds a Chinese translation of the README file to help native Chinese speakers better understand the project.
|
{
"login": "sumingcheng",
"id": 21992204,
"node_id": "MDQ6VXNlcjIxOTkyMjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21992204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumingcheng",
"html_url": "https://github.com/sumingcheng",
"followers_url": "https://api.github.com/users/sumingcheng/followers",
"following_url": "https://api.github.com/users/sumingcheng/following{/other_user}",
"gists_url": "https://api.github.com/users/sumingcheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumingcheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumingcheng/subscriptions",
"organizations_url": "https://api.github.com/users/sumingcheng/orgs",
"repos_url": "https://api.github.com/users/sumingcheng/repos",
"events_url": "https://api.github.com/users/sumingcheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumingcheng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5078/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7033
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7033/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7033/comments
|
https://api.github.com/repos/ollama/ollama/issues/7033/events
|
https://github.com/ollama/ollama/issues/7033
| 2,554,854,330
|
I_kwDOJ0Z1Ps6YR_u6
| 7,033
|
Using smaller context size shows CUDA error: CUBLAS_STATUS_NOT_INITIALIZED
|
{
"login": "aamsur-933",
"id": 74174455,
"node_id": "MDQ6VXNlcjc0MTc0NDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/74174455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aamsur-933",
"html_url": "https://github.com/aamsur-933",
"followers_url": "https://api.github.com/users/aamsur-933/followers",
"following_url": "https://api.github.com/users/aamsur-933/following{/other_user}",
"gists_url": "https://api.github.com/users/aamsur-933/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aamsur-933/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aamsur-933/subscriptions",
"organizations_url": "https://api.github.com/users/aamsur-933/orgs",
"repos_url": "https://api.github.com/users/aamsur-933/repos",
"events_url": "https://api.github.com/users/aamsur-933/events{/privacy}",
"received_events_url": "https://api.github.com/users/aamsur-933/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-29T10:15:36
| 2025-01-06T07:40:36
| 2025-01-06T07:40:36
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello, I have a PC with an NVIDIA GPU (10 GB VRAM). I have installed Ollama and the **deepseek-coder-v2:16b** model on it, and I use continue.dev in VS Code to communicate with Ollama.
The problem: when I set the context size in the plugin to 4K, i.e. `"contextLength": 4096, `, deepseek-coder-v2:16b fails with `CUDA error: CUBLAS_STATUS_NOT_INITIALIZED` in the Ollama logs.
here is the logs from ollama:
[ctx-4k.log](https://github.com/user-attachments/files/17178823/ctx-4k.log)
However, when I set a larger context size of 32K in the plugin, i.e. `"contextLength": 32768, `, the model runs without any error.

here is the logs:
[ctx-32k.log](https://github.com/user-attachments/files/17178843/ctx-32k.log)
Is there any problem with my environment? (A way to reproduce this outside the plugin is sketched below.)
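One way to rule the plugin out is to request the same context size directly from the ollama Python client (a sketch; the prompt is arbitrary):
```python
import ollama

# The same 4K context that fails via continue.dev; if this call also
# triggers CUBLAS_STATUS_NOT_INITIALIZED, the plugin is not at fault.
resp = ollama.generate(
    model="deepseek-coder-v2:16b",
    prompt="write hello world in python",
    options={"num_ctx": 4096},
)
print(resp["response"])
```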
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.12
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7033/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1038
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1038/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1038/comments
|
https://api.github.com/repos/ollama/ollama/issues/1038/events
|
https://github.com/ollama/ollama/pull/1038
| 1,982,633,921
|
PR_kwDOJ0Z1Ps5e4AeT
| 1,038
|
Response preamble for interactive terminal
|
{
"login": "eyelight",
"id": 225149,
"node_id": "MDQ6VXNlcjIyNTE0OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/225149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyelight",
"html_url": "https://github.com/eyelight",
"followers_url": "https://api.github.com/users/eyelight/followers",
"following_url": "https://api.github.com/users/eyelight/following{/other_user}",
"gists_url": "https://api.github.com/users/eyelight/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyelight/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyelight/subscriptions",
"organizations_url": "https://api.github.com/users/eyelight/orgs",
"repos_url": "https://api.github.com/users/eyelight/repos",
"events_url": "https://api.github.com/users/eyelight/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyelight/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-11-08T03:03:25
| 2023-11-09T00:50:39
| 2023-11-09T00:50:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1038",
"html_url": "https://github.com/ollama/ollama/pull/1038",
"diff_url": "https://github.com/ollama/ollama/pull/1038.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1038.patch",
"merged_at": null
}
|
This PR updates the interactive terminal experience to:
- print the active model just above the model's output
- provide `/set preamble` and `/set nopreamble` to turn this behavior on & off
- in both cases, add an extra line to separate prompt & response

|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1038/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1955
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1955/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1955/comments
|
https://api.github.com/repos/ollama/ollama/issues/1955/events
|
https://github.com/ollama/ollama/issues/1955
| 2,079,203,600
|
I_kwDOJ0Z1Ps577iEQ
| 1,955
|
`WARNING: failed to allocate 4096.02 MB of pinned memory: out of memory`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-01-12T16:13:34
| 2024-05-02T21:34:13
| 2024-05-02T21:34:13
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This warning causes inference to get slower
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1955/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4755
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4755/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4755/comments
|
https://api.github.com/repos/ollama/ollama/issues/4755/events
|
https://github.com/ollama/ollama/issues/4755
| 2,328,297,746
|
I_kwDOJ0Z1Ps6KxwES
| 4,755
|
(windows) ollama model downloads do not resume when ollama is reopened
|
{
"login": "waldolin",
"id": 20750014,
"node_id": "MDQ6VXNlcjIwNzUwMDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/20750014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/waldolin",
"html_url": "https://github.com/waldolin",
"followers_url": "https://api.github.com/users/waldolin/followers",
"following_url": "https://api.github.com/users/waldolin/following{/other_user}",
"gists_url": "https://api.github.com/users/waldolin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/waldolin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/waldolin/subscriptions",
"organizations_url": "https://api.github.com/users/waldolin/orgs",
"repos_url": "https://api.github.com/users/waldolin/repos",
"events_url": "https://api.github.com/users/waldolin/events{/privacy}",
"received_events_url": "https://api.github.com/users/waldolin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-05-31T17:33:13
| 2024-05-31T18:35:57
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
## Ollama model downloads do not resume when Ollama is reopened or closed accidentally.
```
C:\Users\lin\AppData\Local\Ollama>ollama run gemma:7b
pulling manifest
pulling ef311de6af9d... 70% ▕███████████████████████████████████████ ▏ 3.5 GB/5.0 GB 3.5 MB/s 7m9s
Error: Post "http://127.0.0.1:11434/api/show": dial tcp 127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it.
```
By the way, can you help me figure out another problem? I have created this Modelfile:
```
FROM D:\Users\lin\.cache\lm-studio\models\MaziyarPanahi\Meta-Llama-3-70B-Instruct-GGUF\Meta-Llama-3-70B-Instruct.Q3_K_M.gguf
FROM D:\Users\lin\.cache\lm-studio\models\mradermacher\CodeLlama3-8B-Python-GGUF\CodeLlama3-8B-Python.f16.gguf
FROM D:\Users\lin\.cache\lm-studio\models\nctu6\Llama3-TAIDE-LX-8B-Chat-Alpha1-GGUF\Llama3-TAIDE-LX-8B-Chat-Alpha1-Q3_K_S.gguf
```
Running `ollama create example -f Modelfile` prints:
```
transferring model data
using existing layer sha256:ffc76ff74022adb94c91442a6eea9a19d3f3568afdc79f03b82b848ff32d81a8
using existing layer sha256:2fef7d258c60b8ef793960004a61f9f0b87723e7ecbc610221efc0cdbe0bc46a
using existing layer sha256:e03488e99c59505264d1f0ff0fc33559e0ea5cd2c05744afbcf9bb485ad82e86
creating new layer sha256:7ca37b96018a295573217abe25dbc2f74318ae156f00cd457c322d0c37f94cc5
writing manifest
success
```
and then I run `ollama run example`.
The import guide at https://github.com/ollama/ollama/blob/main/docs/import.md is not clear to me.
I have downloaded some GGUF files, like Meta-Llama-3-70B-Instruct.Q3_K_M.gguf. How do I set the path, or configure Ollama to use them from a common shared folder, rather than creating new files? Should I create them here:
"C:\Users\lin\AppData\Local\Ollama"?
And how do I point the model directory from "C:\Users\lin\.ollama\models" to
"C:\Users\lin\.cache\lm-studio\models"?
What OS are you running the ollama server on? Windows 11 23H2
What version of Ollama are you using? 0.1.39
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4755/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7590
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7590/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7590/comments
|
https://api.github.com/repos/ollama/ollama/issues/7590/events
|
https://github.com/ollama/ollama/issues/7590
| 2,646,538,915
|
I_kwDOJ0Z1Ps6dvvqj
| 7,590
|
GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed
|
{
"login": "Volker-Weissmann",
"id": 39418860,
"node_id": "MDQ6VXNlcjM5NDE4ODYw",
"avatar_url": "https://avatars.githubusercontent.com/u/39418860?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Volker-Weissmann",
"html_url": "https://github.com/Volker-Weissmann",
"followers_url": "https://api.github.com/users/Volker-Weissmann/followers",
"following_url": "https://api.github.com/users/Volker-Weissmann/following{/other_user}",
"gists_url": "https://api.github.com/users/Volker-Weissmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Volker-Weissmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Volker-Weissmann/subscriptions",
"organizations_url": "https://api.github.com/users/Volker-Weissmann/orgs",
"repos_url": "https://api.github.com/users/Volker-Weissmann/repos",
"events_url": "https://api.github.com/users/Volker-Weissmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/Volker-Weissmann/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 19
| 2024-11-09T20:56:01
| 2024-12-08T07:50:00
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
If I try to run the `llama3.2-vision` model using `ollama run llama3.2-vision` on my Arch Linux machine, I get this error:
```
Error: llama runner process has terminated: GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed
```
`ollama run llama3.2` and `ollama run llava` works fine.
I have an i7-6700K and a GeForce GTX 1060 6GB. I installed ollama using `pacman -S ollama`
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.1
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7590/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7590/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8414
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8414/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8414/comments
|
https://api.github.com/repos/ollama/ollama/issues/8414/events
|
https://github.com/ollama/ollama/issues/8414
| 2,786,257,669
|
I_kwDOJ0Z1Ps6mEusF
| 8,414
|
[Feature] Support Intel GPUs
|
{
"login": "NeoZhangJianyu",
"id": 46982523,
"node_id": "MDQ6VXNlcjQ2OTgyNTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/46982523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NeoZhangJianyu",
"html_url": "https://github.com/NeoZhangJianyu",
"followers_url": "https://api.github.com/users/NeoZhangJianyu/followers",
"following_url": "https://api.github.com/users/NeoZhangJianyu/following{/other_user}",
"gists_url": "https://api.github.com/users/NeoZhangJianyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NeoZhangJianyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NeoZhangJianyu/subscriptions",
"organizations_url": "https://api.github.com/users/NeoZhangJianyu/orgs",
"repos_url": "https://api.github.com/users/NeoZhangJianyu/repos",
"events_url": "https://api.github.com/users/NeoZhangJianyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/NeoZhangJianyu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 3
| 2025-01-14T04:58:03
| 2025-01-14T06:12:47
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Intel GPU support was previously added to Ollama by the merged PR https://github.com/ollama/ollama/pull/2458,
but that functionality has since disappeared.
I see several existing issues and open PRs for Intel GPU support, but they are too old.
I want to draft PRs to support Intel GPUs, both dGPU and iGPU (11th-gen Core and newer), by including the llama.cpp SYCL backend.
This issue is created to track the development work and reduce duplicated work in the future.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8414/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8414/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7618
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7618/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7618/comments
|
https://api.github.com/repos/ollama/ollama/issues/7618/events
|
https://github.com/ollama/ollama/issues/7618
| 2,648,549,764
|
I_kwDOJ0Z1Ps6d3amE
| 7,618
|
llama runner process has terminated: signal: segmentation fault (core dumped)
|
{
"login": "Dhruv-1212",
"id": 132161275,
"node_id": "U_kgDOB-Ce-w",
"avatar_url": "https://avatars.githubusercontent.com/u/132161275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dhruv-1212",
"html_url": "https://github.com/Dhruv-1212",
"followers_url": "https://api.github.com/users/Dhruv-1212/followers",
"following_url": "https://api.github.com/users/Dhruv-1212/following{/other_user}",
"gists_url": "https://api.github.com/users/Dhruv-1212/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dhruv-1212/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dhruv-1212/subscriptions",
"organizations_url": "https://api.github.com/users/Dhruv-1212/orgs",
"repos_url": "https://api.github.com/users/Dhruv-1212/repos",
"events_url": "https://api.github.com/users/Dhruv-1212/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dhruv-1212/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-11-11T08:24:55
| 2024-11-12T09:36:11
| 2024-11-12T09:36:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I get a segmentation fault (core dumped) error for snowflake-arctic-embed:latest; other models are working fine.
These are the system logs:
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.252Z level=INFO source=server.go:108 msg="system memory" total="29.4 GiB" free="26.9 GiB" free_swap="0 B"
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.253Z level=INFO source=memory.go:326 msg="offload to cpu" layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[26.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="705.4 MiB" memory.required.partial="0 B" memory.required.kv="12.0 MiB" memory.required.allocations="[705.4 MiB]" memory.weights.total="589.2 MiB" memory.weights.repeating="529.6 MiB" memory.weights.nonrepeating="59.6 MiB" memory.graph.full="32.0 MiB" memory.graph.partial="32.0 MiB"
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.255Z level=INFO source=server.go:399 msg="starting llama server" cmd="/tmp/ollama1595154785/runners/cpu_avx2/ollama_llama_server --model /var/snap/ollama/common/models/blobs/sha256-fb3b66c7bdf6dabbb2edbc22627f4cb2df021c9e9545b54feafd8a7c09fe8ec5 --ctx-size 2048 --batch-size 512 --embedding --log-disable --no-mmap --parallel 1 --port 35273"
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.256Z level=INFO source=sched.go:449 msg="loaded runners" count=1
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.256Z level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.256Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[13066]: INFO [main] starting c++ runner | tid="134564798229440" timestamp=1731313564
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[13066]: INFO [main] build info | build=10 commit="3cd3d45b" tid="134564798229440" timestamp=1731313564
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[13066]: INFO [main] system info | n_threads=4 n_threads_batch=4 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="134564798229440" timestamp=1731313564 total_threads=8
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[13066]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="35273" tid="134564798229440" timestamp=1731313564
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: loaded meta data with 20 key-value pairs and 389 tensors from /var/snap/ollama/common/models/blobs/sha256-fb3b66c7bdf6dabbb2edbc22627f4cb2df021c9e9545b54feafd8a7c09fe8ec5 (version GGUF V3 (latest))
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 0: general.architecture str = bert
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 1: general.name str = snowflake-arctic-embed-l
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 2: bert.block_count u32 = 24
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 3: bert.context_length u32 = 512
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 4: bert.embedding_length u32 = 1024
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 5: bert.feed_forward_length u32 = 4096
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 6: bert.attention.head_count u32 = 16
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 7: bert.attention.layer_norm_epsilon f32 = 0.000000
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 8: general.file_type u32 = 1
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 9: bert.attention.causal bool = false
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 10: bert.pooling_type u32 = 2
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 11: tokenizer.ggml.token_type_count u32 = 2
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 12: tokenizer.ggml.model str = bert
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,30522] = ["[PAD]", "[unused0]", "[unused1]", "...
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,30522] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 15: tokenizer.ggml.unknown_token_id u32 = 100
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 16: tokenizer.ggml.seperator_token_id u32 = 102
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 17: tokenizer.ggml.padding_token_id u32 = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 18: tokenizer.ggml.cls_token_id u32 = 101
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - kv 19: tokenizer.ggml.mask_token_id u32 = 103
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - type f32: 243 tensors
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_model_loader: - type f16: 146 tensors
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_vocab: special tokens cache size = 5
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_vocab: token to piece cache size = 0.2032 MB
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: format = GGUF V3 (latest)
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: arch = bert
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: vocab type = WPM
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_vocab = 30522
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_merges = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: vocab_only = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_ctx_train = 512
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_embd = 1024
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_layer = 24
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_head = 16
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_head_kv = 16
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_rot = 64
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_swa = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_embd_head_k = 64
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_embd_head_v = 64
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_gqa = 1
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_embd_k_gqa = 1024
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_embd_v_gqa = 1024
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: f_norm_eps = 1.0e-12
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: f_norm_rms_eps = 0.0e+00
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: f_logit_scale = 0.0e+00
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_ff = 4096
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_expert = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_expert_used = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: causal attn = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: pooling type = 2
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: rope type = 2
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: rope scaling = linear
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: freq_base_train = 10000.0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: freq_scale_train = 1
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: n_ctx_orig_yarn = 512
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: rope_finetuned = unknown
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: ssm_d_conv = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: ssm_d_inner = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: ssm_d_state = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: ssm_dt_rank = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: model type = 335M
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: model ftype = F16
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: model params = 334.09 M
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: model size = 637.85 MiB (16.02 BPW)
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: general.name = snowflake-arctic-embed-l
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: UNK token = 100 '[UNK]'
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: SEP token = 102 '[SEP]'
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: PAD token = 0 '[PAD]'
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: CLS token = 101 '[CLS]'
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: MASK token = 103 '[MASK]'
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: LF token = 0 '[PAD]'
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_print_meta: max token length = 21
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_tensors: ggml ctx size = 0.16 MiB
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llm_load_tensors: CPU buffer size = 637.85 MiB
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.508Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: n_ctx = 2048
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: n_batch = 512
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: n_ubatch = 512
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: flash_attn = 0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: freq_base = 10000.0
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: freq_scale = 1
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_kv_cache_init: CPU KV buffer size = 192.00 MiB
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: KV self size = 192.00 MiB, K (f16): 96.00 MiB, V (f16): 96.00 MiB
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: CPU output buffer size = 0.00 MiB
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: CPU compute buffer size = 25.01 MiB
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: graph nodes = 849
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: llama_new_context_with_model: graph splits = 1
Nov 11 08:26:04 dev-_-aiml-reco kernel: ollama_llama_se[13066]: segfault at 7a629d9ff820 ip 00007a62cf570dc8 sp 00007fff67c94298 error 4 in libggml.so[7a62cf56e000+98000] likely on CPU 3 (core 3, socket 0)
Nov 11 08:26:04 dev-_-aiml-reco kernel: Code: 00 00 f3 0f 1e fa e9 77 ff ff ff 0f 1f 80 00 00 00 00 f3 0f 1e fa 48 85 d2 7e 27 4c 8b 05 00 52 0b 00 31 c0 66 0f 1f 44 00 00 <0f> b7 0c 47 c4 c1 7a 10 04 88 c5 fa 11 04 86 48 83 c0 01 48 39 c2
Nov 11 08:26:05 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:05.032Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
Nov 11 08:26:05 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:05.282Z level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: signal: segmentation fault (core dumped)"
Nov 11 08:26:05 dev-_-aiml-reco ollama.listener[1455]: [GIN] 2024/11/11 - 08:26:05 | 500 | 1.038579932s | 127.0.0.1 | POST "/api/embeddings"
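For what it's worth, the memory figures in the log are internally consistent; a quick arithmetic check (values taken from the `llm_load_print_meta` and `llama_new_context_with_model` lines above):
```python
# Quick check of the logged "KV self size = 192.00 MiB":
# K and V caches in f16 (2 bytes/element), with n_layer=24, n_ctx=2048,
# and n_embd_k_gqa = n_embd_v_gqa = 1024, all taken from the log above.
n_layer, n_ctx, n_embd_gqa, f16_bytes = 24, 2048, 1024, 2
kv_bytes = 2 * n_layer * n_ctx * n_embd_gqa * f16_bytes  # K + V
print(kv_bytes / 2**20)  # -> 192.0 (MiB), matching the log
```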
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.0.0
|
{
"login": "Dhruv-1212",
"id": 132161275,
"node_id": "U_kgDOB-Ce-w",
"avatar_url": "https://avatars.githubusercontent.com/u/132161275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dhruv-1212",
"html_url": "https://github.com/Dhruv-1212",
"followers_url": "https://api.github.com/users/Dhruv-1212/followers",
"following_url": "https://api.github.com/users/Dhruv-1212/following{/other_user}",
"gists_url": "https://api.github.com/users/Dhruv-1212/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dhruv-1212/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dhruv-1212/subscriptions",
"organizations_url": "https://api.github.com/users/Dhruv-1212/orgs",
"repos_url": "https://api.github.com/users/Dhruv-1212/repos",
"events_url": "https://api.github.com/users/Dhruv-1212/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dhruv-1212/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7618/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7531
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7531/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7531/comments
|
https://api.github.com/repos/ollama/ollama/issues/7531/events
|
https://github.com/ollama/ollama/issues/7531
| 2,639,156,967
|
I_kwDOJ0Z1Ps6dTlbn
| 7,531
|
Poor acceleration choices with mixed GPUs
|
{
"login": "cobrafast",
"id": 3317555,
"node_id": "MDQ6VXNlcjMzMTc1NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3317555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cobrafast",
"html_url": "https://github.com/cobrafast",
"followers_url": "https://api.github.com/users/cobrafast/followers",
"following_url": "https://api.github.com/users/cobrafast/following{/other_user}",
"gists_url": "https://api.github.com/users/cobrafast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cobrafast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cobrafast/subscriptions",
"organizations_url": "https://api.github.com/users/cobrafast/orgs",
"repos_url": "https://api.github.com/users/cobrafast/repos",
"events_url": "https://api.github.com/users/cobrafast/events{/privacy}",
"received_events_url": "https://api.github.com/users/cobrafast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-11-06T20:31:57
| 2024-11-08T19:28:09
| 2024-11-08T19:28:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I've noticed that Ollama makes poor decisions about acceleration in setups with heterogeneous GPUs. For example, I have a 16 GB VRAM dGPU and a 3 GB VRAM dGPU in my desktop PC, and Ollama seems to consider only the smaller GPU, even if I set `CUDA_VISIBLE_DEVICES=0` to restrict computation to the bigger one.
```
time=2024-11-03T22:28:58.806+01:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-03T22:28:58.806+01:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2024-11-03T22:28:58.806+01:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=16
time=2024-11-03T22:28:59.090+01:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-3392b891-9899-c4e1-5fff-f56fe0c463c5 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4080" total="16.0 GiB" available="14.7 GiB"
...
time=2024-11-06T21:16:40.820+01:00 level=INFO source=server.go:105 msg="system memory" total="127.7 GiB" free="85.7 GiB" free_swap="96.8 GiB"
time=2024-11-06T21:16:40.821+01:00 level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=2 layers.split="" memory.available="[914.5 MiB]" memory.gpu_overhead="0 B" memory.required.full="3.4 GiB" memory.required.partial="839.3 MiB" memory.required.kv="768.0 MiB" memory.required.allocations="[839.3 MiB]" memory.weights.total="2.6 GiB" memory.weights.repeating="2.6 GiB" memory.weights.nonrepeating="77.1 MiB" memory.graph.full="128.0 MiB" memory.graph.partial="128.0 MiB"
...
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4080, compute capability 8.9, VMM: yes
```
If I'm reading this right, Ollama thinks there's 839 MiB of available VRAM, which seems correct for the smaller GPU, but the bigger one should have some ~15 GiB available that doesn't seem to be considered at all.
This seems to make Ollama split the model between CPU and GPU, or run on the CPU entirely.
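As a diagnostic, a minimal sketch (assuming `nvidia-smi` is on PATH) that lists free VRAM per GPU as the driver reports it, for comparison against the single `memory.available` value Ollama logs:
```python
# List free VRAM per GPU via nvidia-smi, to compare against the
# "memory.available" value in Ollama's "offload to cuda" log line.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.free",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout
for line in out.strip().splitlines():
    print(line)  # e.g. "0, NVIDIA GeForce RTX 4080, 15062 MiB"
```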
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.14
|
{
"login": "cobrafast",
"id": 3317555,
"node_id": "MDQ6VXNlcjMzMTc1NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3317555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cobrafast",
"html_url": "https://github.com/cobrafast",
"followers_url": "https://api.github.com/users/cobrafast/followers",
"following_url": "https://api.github.com/users/cobrafast/following{/other_user}",
"gists_url": "https://api.github.com/users/cobrafast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cobrafast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cobrafast/subscriptions",
"organizations_url": "https://api.github.com/users/cobrafast/orgs",
"repos_url": "https://api.github.com/users/cobrafast/repos",
"events_url": "https://api.github.com/users/cobrafast/events{/privacy}",
"received_events_url": "https://api.github.com/users/cobrafast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7531/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2608
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2608/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2608/comments
|
https://api.github.com/repos/ollama/ollama/issues/2608/events
|
https://github.com/ollama/ollama/issues/2608
| 2,143,657,320
|
I_kwDOJ0Z1Ps5_xZ1o
| 2,608
|
How to identify multimodal models?
|
{
"login": "gluonfield",
"id": 5672094,
"node_id": "MDQ6VXNlcjU2NzIwOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5672094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gluonfield",
"html_url": "https://github.com/gluonfield",
"followers_url": "https://api.github.com/users/gluonfield/followers",
"following_url": "https://api.github.com/users/gluonfield/following{/other_user}",
"gists_url": "https://api.github.com/users/gluonfield/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gluonfield/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gluonfield/subscriptions",
"organizations_url": "https://api.github.com/users/gluonfield/orgs",
"repos_url": "https://api.github.com/users/gluonfield/repos",
"events_url": "https://api.github.com/users/gluonfield/events{/privacy}",
"received_events_url": "https://api.github.com/users/gluonfield/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-02-20T06:49:04
| 2024-02-20T17:48:41
| 2024-02-20T17:48:31
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi guys, incredible work with Ollama!
I'm building a client for Ollama and wondering what the best way is to identify multimodal models like `llava` and `bakllava` from the API. I want to display additional UI if a model supports images.
It seems that both `llava` and `bakllava` return an `/api/tags` response containing the family `clip`:
```json
{
...
"details": {
"families": ["clip"],
}
}
```
Should `clip` be associated with a model's image support?
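For reference, a minimal client-side sketch of that heuristic (assuming a local server on the default port and the `details.families` field shown above):
```python
# Treat models whose reported families include "clip" as image-capable.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    tags = json.load(resp)

for model in tags.get("models", []):
    families = (model.get("details") or {}).get("families") or []
    if "clip" in families:
        print(f"{model['name']} looks multimodal (families={families})")
```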
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2608/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3165
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3165/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3165/comments
|
https://api.github.com/repos/ollama/ollama/issues/3165/events
|
https://github.com/ollama/ollama/issues/3165
| 2,188,108,288
|
I_kwDOJ0Z1Ps6Ca-IA
| 3,165
|
Support "tool" role in messages
|
{
"login": "lebrunel",
"id": 124721263,
"node_id": "U_kgDOB28Ybw",
"avatar_url": "https://avatars.githubusercontent.com/u/124721263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lebrunel",
"html_url": "https://github.com/lebrunel",
"followers_url": "https://api.github.com/users/lebrunel/followers",
"following_url": "https://api.github.com/users/lebrunel/following{/other_user}",
"gists_url": "https://api.github.com/users/lebrunel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lebrunel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lebrunel/subscriptions",
"organizations_url": "https://api.github.com/users/lebrunel/orgs",
"repos_url": "https://api.github.com/users/lebrunel/repos",
"events_url": "https://api.github.com/users/lebrunel/events{/privacy}",
"received_events_url": "https://api.github.com/users/lebrunel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-03-15T09:42:21
| 2024-07-26T00:46:25
| 2024-07-26T00:46:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
The new Hermes 2 Pro model recommends that results from function calling come back in messages with the role "tool", e.g.:
```
<|im_start|>tool
<tool_response>...result here...</tool_response>
<|im_end|>
```
The chat API doesn't support messages with the role "tool"; it treats them as a bad request.
### How should we solve this?
Accepting the "tool" role in the messages API would make life easier for those using Hermes 2 Pro, and for any future models likely to be based on the same open datasets.
This has implications for how templating works in Ollama; it may even require a total rethink of templates.
### What is the impact of not solving this?
Not solving it means that users have to create their own templates and use Ollama's `raw` option, which negates some of the joy of using Ollama in the first place.
I truly believe function calling and building local agents is one area where Ollama really could excel, if the experience of doing so is made totally painless.
### Anything else?
Repo of Hermes Pro function calling with prompting/templating instructions:
https://github.com/NousResearch/Hermes-Function-Calling
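To illustrate the request shape being asked for, a hedged sketch of an `/api/chat` call carrying a tool result back to the model (the `tool` role here is the proposal, which the endpoint currently rejects; the model name is hypothetical):
```python
# Sketch of the desired request: a "tool" role message carrying a
# function-call result. The API currently rejects this role.
import json
import urllib.request

payload = {
    "model": "hermes-2-pro",  # hypothetical local model name
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"},
        {"role": "assistant",
         "content": '<tool_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</tool_call>'},
        {"role": "tool",
         "content": "<tool_response>18 degrees C, clear</tool_response>"},
    ],
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```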
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3165/reactions",
"total_count": 35,
"+1": 32,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
}
|
https://api.github.com/repos/ollama/ollama/issues/3165/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3862
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3862/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3862/comments
|
https://api.github.com/repos/ollama/ollama/issues/3862/events
|
https://github.com/ollama/ollama/issues/3862
| 2,260,214,188
|
I_kwDOJ0Z1Ps6GuCGs
| 3,862
|
Please use comfyUI-like to realize Automatic Programming?
|
{
"login": "qwas982",
"id": 10122306,
"node_id": "MDQ6VXNlcjEwMTIyMzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/10122306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qwas982",
"html_url": "https://github.com/qwas982",
"followers_url": "https://api.github.com/users/qwas982/followers",
"following_url": "https://api.github.com/users/qwas982/following{/other_user}",
"gists_url": "https://api.github.com/users/qwas982/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qwas982/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qwas982/subscriptions",
"organizations_url": "https://api.github.com/users/qwas982/orgs",
"repos_url": "https://api.github.com/users/qwas982/repos",
"events_url": "https://api.github.com/users/qwas982/events{/privacy}",
"received_events_url": "https://api.github.com/users/qwas982/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-04-24T03:20:59
| 2024-04-24T03:20:59
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Auto coding.
[https://raw.githubusercontent.com/comfyanonymous/ComfyUI/master/comfyui_screenshot.png](url)
As Andrew Ng says, the agentic workflow is much stronger than the original GPT-3.5.
I know that Ollama can be called via its API. The problem now is how to implement the agent workflow in the UI and complete the human-computer interaction of automatic coding, which is why I thought of ComfyUI's node-connection approach.
Perhaps you would also need a window to communicate with the large model, a window to return code, and a terminal window.
This is an initial, relatively vague idea about how to design automatic coding.
Does an agentic workflow still need an intermediate layer like Ollama for communication between the UI and the large model?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3862/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2772
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2772/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2772/comments
|
https://api.github.com/repos/ollama/ollama/issues/2772/events
|
https://github.com/ollama/ollama/pull/2772
| 2,155,466,408
|
PR_kwDOJ0Z1Ps5n_CFa
| 2,772
|
Refine container image build script
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-27T01:26:58
| 2024-02-27T19:29:11
| 2024-02-27T19:29:08
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2772",
"html_url": "https://github.com/ollama/ollama/pull/2772",
"diff_url": "https://github.com/ollama/ollama/pull/2772.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2772.patch",
"merged_at": "2024-02-27T19:29:08"
}
|
Allow overriding the platform, image name, and tag latest for standard and rocm images.
Fixes #2721
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2772/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7576
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7576/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7576/comments
|
https://api.github.com/repos/ollama/ollama/issues/7576/events
|
https://github.com/ollama/ollama/issues/7576
| 2,644,502,171
|
I_kwDOJ0Z1Ps6dn-ab
| 7,576
|
num_ctx causes 100% CPU with no GPU usage
|
{
"login": "aaronbolton",
"id": 18211890,
"node_id": "MDQ6VXNlcjE4MjExODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/18211890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronbolton",
"html_url": "https://github.com/aaronbolton",
"followers_url": "https://api.github.com/users/aaronbolton/followers",
"following_url": "https://api.github.com/users/aaronbolton/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronbolton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronbolton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronbolton/subscriptions",
"organizations_url": "https://api.github.com/users/aaronbolton/orgs",
"repos_url": "https://api.github.com/users/aaronbolton/repos",
"events_url": "https://api.github.com/users/aaronbolton/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronbolton/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-11-08T16:22:48
| 2024-12-29T06:16:17
| 2024-11-12T18:43:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I've recently tried to create a new model with only the parameter num_ctx changed. When I run the model, it shows 100% CPU with no GPU usage; even if the model were too big, I would expect it to report a GPU/CPU split (e.g. 100%/???%) rather than CPU only.
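For anyone hitting this, one way to see the actual CPU/GPU split of a loaded model (assuming a server recent enough to expose `/api/ps` on the default port):
```python
# Report what fraction of each loaded model is resident in VRAM.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    ps = json.load(resp)

for m in ps.get("models", []):
    total, vram = m.get("size", 0), m.get("size_vram", 0)
    gpu = 100 * vram / total if total else 0
    print(f"{m['name']}: {gpu:.0f}% GPU / {100 - gpu:.0f}% CPU")
```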
### OS
Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.0
|
{
"login": "aaronbolton",
"id": 18211890,
"node_id": "MDQ6VXNlcjE4MjExODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/18211890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronbolton",
"html_url": "https://github.com/aaronbolton",
"followers_url": "https://api.github.com/users/aaronbolton/followers",
"following_url": "https://api.github.com/users/aaronbolton/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronbolton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronbolton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronbolton/subscriptions",
"organizations_url": "https://api.github.com/users/aaronbolton/orgs",
"repos_url": "https://api.github.com/users/aaronbolton/repos",
"events_url": "https://api.github.com/users/aaronbolton/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronbolton/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7576/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/7576/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/724
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/724/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/724/comments
|
https://api.github.com/repos/ollama/ollama/issues/724/events
|
https://github.com/ollama/ollama/pull/724
| 1,930,906,854
|
PR_kwDOJ0Z1Ps5cJeKg
| 724
|
improve vram safety with 5% vram memory buffer
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-10-06T20:39:38
| 2023-10-13T13:27:28
| 2023-10-10T20:16:09
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/724",
"html_url": "https://github.com/ollama/ollama/pull/724",
"diff_url": "https://github.com/ollama/ollama/pull/724.diff",
"patch_url": "https://github.com/ollama/ollama/pull/724.patch",
"merged_at": "2023-10-10T20:16:09"
}
|
In testing how much VRAM should be allocated, we typically used a model which could be entirely loaded into VRAM. This masked an issue: when a model is larger than the available VRAM, it is possible to consume all available VRAM and fail with an error:
```
Error: llama runner failed: out of memory
```
This change leaves a 10% buffer on available VRAM to prevent running out of memory.
Tested on a T4:
- `llama2:7b`: easily offloads all layers to GPU
- `llama2:13b`: easily offloads all layers to GPU
- `llama2:70b`: offloaded 29 layers to GPU, was slow but did not run out of memory on load (as it did before)
Resolves #725
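A minimal sketch of the buffering idea (illustrative Python, not the actual Go implementation; the layer sizes below are placeholders):
```python
# Reserve a safety fraction of free VRAM before deciding how many
# layers to offload. The PR title mentions 5%; the description, 10%.
def usable_vram(free_bytes: int, buffer_fraction: float) -> int:
    return int(free_bytes * (1.0 - buffer_fraction))

def layers_to_offload(n_layers: int, layer_bytes: int, free_bytes: int,
                      buffer_fraction: float = 0.10) -> int:
    budget = usable_vram(free_bytes, buffer_fraction)
    return min(n_layers, budget // layer_bytes)

# Example: 16 GiB free, 80 layers of ~800 MiB each (hypothetical numbers).
print(layers_to_offload(80, 800 * 1024**2, 16 * 1024**3))  # -> 18
```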
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/724/reactions",
"total_count": 3,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/724/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7579
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7579/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7579/comments
|
https://api.github.com/repos/ollama/ollama/issues/7579/events
|
https://github.com/ollama/ollama/pull/7579
| 2,644,647,728
|
PR_kwDOJ0Z1Ps6BWVAQ
| 7,579
|
Set macos min version for all architectures
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-08T17:16:32
| 2024-11-08T17:27:07
| 2024-11-08T17:27:04
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7579",
"html_url": "https://github.com/ollama/ollama/pull/7579",
"diff_url": "https://github.com/ollama/ollama/pull/7579.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7579.patch",
"merged_at": "2024-11-08T17:27:04"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7579/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/911
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/911/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/911/comments
|
https://api.github.com/repos/ollama/ollama/issues/911/events
|
https://github.com/ollama/ollama/issues/911
| 1,962,679,881
|
I_kwDOJ0Z1Ps50_B5J
| 911
|
When out of disk space, Ollama still retries to download
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2023-10-26T04:27:09
| 2023-10-26T19:24:22
| 2023-10-26T19:24:22
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
OLLAMA_HOST=https://redacted:443 ollama pull llama2:13b
pulling manifest
pulling 29fdb92e57cf... 5% |█ | (408 MB/7.4 GB, 99 MB/s) [4s:1m10s]Error: max retries exceeded
```
```
OLLAMA_HOST=https://redacted:443 ollama pull llama2:13b
pulling manifest
pulling 29fdb92e57cf... 7% |██ | (527 MB/7.4 GB, 112 MB/s) [4s:1m1s]Error: max retries exceeded
```
In ollama logs:
```
2023-10-26T04:20:50.961 app[48ed663be33558] ord [info] 2023/10/26 04:20:50 download.go:164: 29fdb92e57cf part 24 attempt 1 failed: write /root/.ollama/models/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2-partial: no space left on device, retrying
2023-10-26T04:20:50.969 app[48ed663be33558] ord [info] 2023/10/26 04:20:50 download.go:164: 29fdb92e57cf part 36 attempt 1 failed: write /root/.ollama/models/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2-partial: no space left on device, retrying
2023-10-26T04:20:50.975 app[48ed663be33558] ord [info] 2023/10/26 04:20:50 download.go:164: 29fdb92e57cf part 10 attempt 1 failed: write /root/.ollama/models/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2-partial: no space left on device, retrying
2023-10-26T04:20:50.975 app[48ed663be33558] ord [info] 2023/10/26 04:20:50 download.go:164: 29fdb92e57cf part 50 attempt 1 failed: write /root/.ollama/models/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2-partial: no space left on device, retrying
2023-10-26T04:20:51.070 app[48ed663be33558] ord [info] 2023/10/26 04:20:51 download.go:164: 29fdb92e57cf part 13 attempt 1 failed: write /root/.ollama/models/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2-partial: no space left on device, retrying
2023-10-26T04:20:51.115 app[48ed663be33558] ord [info] 2023/10/26 04:20:51 download.go:164: 29fdb92e57cf part 28 attempt 1 failed: write /root/.ollama/models/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2-partial: no space left on device, retrying
2023-10-26T04:20:51.185 app[48ed663be33558] ord [info] 2023/10/26 04:20:51 download.go:164: 29fdb92e57cf part 9 attempt 1 failed: write /root/.ollama/models/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2-partial: no space left on device, retrying
2023-10-26T04:20:51.207 app[48ed663be33558] ord [info] 2023/10/26 04:20:51 download.go:164: 29fdb92e57cf part 56 attempt 1 failed: write /root/.ollama/models/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2-partial: no space left on device, retrying
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/911/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/211
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/211/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/211/comments
|
https://api.github.com/repos/ollama/ollama/issues/211/events
|
https://github.com/ollama/ollama/pull/211
| 1,820,841,544
|
PR_kwDOJ0Z1Ps5WXETR
| 211
|
update llama.cpp
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-25T17:51:16
| 2023-07-27T23:57:04
| 2023-07-27T23:57:03
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/211",
"html_url": "https://github.com/ollama/ollama/pull/211",
"diff_url": "https://github.com/ollama/ollama/pull/211.diff",
"patch_url": "https://github.com/ollama/ollama/pull/211.patch",
"merged_at": "2023-07-27T23:57:03"
}
|
update to eb542d39324574a6778fad9ba9e34ba7a14a82a3
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/211/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2981
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2981/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2981/comments
|
https://api.github.com/repos/ollama/ollama/issues/2981/events
|
https://github.com/ollama/ollama/issues/2981
| 2,173,938,633
|
I_kwDOJ0Z1Ps6Bk6vJ
| 2,981
|
when i restart windows, ollama will open automatically, how can i close the self-start function?
|
{
"login": "08183080",
"id": 51738561,
"node_id": "MDQ6VXNlcjUxNzM4NTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/51738561?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/08183080",
"html_url": "https://github.com/08183080",
"followers_url": "https://api.github.com/users/08183080/followers",
"following_url": "https://api.github.com/users/08183080/following{/other_user}",
"gists_url": "https://api.github.com/users/08183080/gists{/gist_id}",
"starred_url": "https://api.github.com/users/08183080/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/08183080/subscriptions",
"organizations_url": "https://api.github.com/users/08183080/orgs",
"repos_url": "https://api.github.com/users/08183080/repos",
"events_url": "https://api.github.com/users/08183080/events{/privacy}",
"received_events_url": "https://api.github.com/users/08183080/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-03-07T14:01:52
| 2024-04-15T21:59:43
| 2024-03-11T22:25:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When I restart Windows, Ollama opens automatically. How can I disable this auto-start behavior?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2981/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
}
|
https://api.github.com/repos/ollama/ollama/issues/2981/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6282
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6282/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6282/comments
|
https://api.github.com/repos/ollama/ollama/issues/6282/events
|
https://github.com/ollama/ollama/pull/6282
| 2,457,641,055
|
PR_kwDOJ0Z1Ps537xTf
| 6,282
|
AMD integrated graphic on linux kernel 6.9.9+, GTT memory, loading freeze fix
|
{
"login": "MaciejMogilany",
"id": 56433591,
"node_id": "MDQ6VXNlcjU2NDMzNTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/56433591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaciejMogilany",
"html_url": "https://github.com/MaciejMogilany",
"followers_url": "https://api.github.com/users/MaciejMogilany/followers",
"following_url": "https://api.github.com/users/MaciejMogilany/following{/other_user}",
"gists_url": "https://api.github.com/users/MaciejMogilany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaciejMogilany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaciejMogilany/subscriptions",
"organizations_url": "https://api.github.com/users/MaciejMogilany/orgs",
"repos_url": "https://api.github.com/users/MaciejMogilany/repos",
"events_url": "https://api.github.com/users/MaciejMogilany/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaciejMogilany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 103
| 2024-08-09T10:43:59
| 2025-01-28T22:35:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6282",
"html_url": "https://github.com/ollama/ollama/pull/6282",
"diff_url": "https://github.com/ollama/ollama/pull/6282.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6282.patch",
"merged_at": null
}
|
This commit reflects changes in Linux kernel 6.9.9+ on small APUs. LLMs load into GTT memory, which is set to 1/2 of RAM by default and can be changed. This allows using bigger models with an AMD APU without a VRAM carveout, and loading models bigger than the maximum VRAM carveout of 16 GiB. No hacks like [torch-apu-helper](https://github.com/pomoke/torch-apu-helper), [force-host-alloction-APU](https://github.com/segurac/force-host-alloction-APU), [Rusticl](https://docs.mesa3d.org/rusticl.html), or [unlock VRAM allocation](https://winstonhyypia.medium.com/amd-apu-how-to-modify-the-dedicated-gpu-memory-e27b75905056) are needed.
APUs this applies to:
"gfx1103" //890m, 780m, 760m, 740m GPU RDNA3
"gfx1037" //610M GPU RDNA2
"gfx1035" //680m, 660m GPU RDNA2
"gfx1033" //Van Gogh RDNA2
"gfx1036" //RDNA2 APU
"gfx1151" //RDNA3+ APU
"gfx1152" //RDNA3+ APU
"gfx940" //MI300A CDNA3
"gfx90c" //Radeon Vega 7 Ryzen 5600G
The commit also addresses problems with Ollama and APUs on kernel 6.9.9+:
- ollama server hang due to memory management issues
- [CPU tensor buffer causing OOM in linux](https://github.com/ollama/ollama/issues/2637#issuecomment-2306976825)
Changes are applied only if a single GPU is present, it's from the list above (no discrete graphics card added), and a kernel above 6.9.9 is used on the system. Existing discrete-GPU functionality is unchanged.
Note:
APUs are not officially supported. They can be enabled
for 680m by:
export HSA_OVERRIDE_GFX_VERSION=10.3.0
export OLLAMA_MAX_LOADED_MODELS=1
export OLLAMA_NUM_PARALLEL=1
for 780m by:
export HSA_OVERRIDE_GFX_VERSION=11.0.1
export OLLAMA_MAX_LOADED_MODELS=1
export OLLAMA_NUM_PARALLEL=1
To mitigate GPU hangs on unsupported ROCm GPUs, use OLLAMA_MAX_LOADED_MODELS=1 and OLLAMA_NUM_PARALLEL=1.
Memory available to the APU can be adjusted by editing /etc/modprobe.d/ttm.conf (in number of 4 KiB pages; for 48 GiB it will be):
options ttm pages_limit=12582912
options ttm page_pool_size=12582912
[more info](https://github.com/ollama/ollama/issues/2637#issuecomment-2272913656)
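The page counts above follow directly from the target size; a quick check (4 KiB pages):
```python
# 48 GiB expressed as 4 KiB pages, matching pages_limit above.
print(48 * 1024**3 // 4096)  # -> 12582912
```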
Fixes issues https://github.com/ollama/ollama/issues/6362#issue-2466206599 and https://github.com/ollama/ollama/issues/6572#issue-2498127452
Partially fixes (the Ollama part) https://github.com/ollama/ollama/issues/2637#issue-2146959786
To test this:
```
git clone https://github.com/Maciej-Mogilany/ollama.git
cd ollama
git checkout AMD_APU_GTT_memory
make -j 5
export HSA_OVERRIDE_GFX_VERSION=11.0.1  # for 780m
sudo systemctl stop ollama               # stop the original ollama for now
./ollama serve
# In another terminal:
./ollama run <model name>
# If everything works, you may replace the original ollama binary with the
# one built from source, and add HSA_OVERRIDE_GFX_VERSION=11.0.1 to the
# ollama service for convenience.
sudo systemctl start ollama              # start the original ollama
```
If you have any problems, please ask Sonnet 3.5 about them; that way you will be able to solve 95% of problems.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6282/reactions",
"total_count": 26,
"+1": 19,
"-1": 0,
"laugh": 0,
"hooray": 7,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6282/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5779
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5779/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5779/comments
|
https://api.github.com/repos/ollama/ollama/issues/5779/events
|
https://github.com/ollama/ollama/pull/5779
| 2,417,118,018
|
PR_kwDOJ0Z1Ps510Pa5
| 5,779
|
server: check for empty tools array too
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-18T18:43:28
| 2024-07-18T18:44:59
| 2024-07-18T18:44:58
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5779",
"html_url": "https://github.com/ollama/ollama/pull/5779",
"diff_url": "https://github.com/ollama/ollama/pull/5779.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5779.patch",
"merged_at": "2024-07-18T18:44:58"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5779/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7376
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7376/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7376/comments
|
https://api.github.com/repos/ollama/ollama/issues/7376/events
|
https://github.com/ollama/ollama/issues/7376
| 2,616,123,180
|
I_kwDOJ0Z1Ps6b7t8s
| 7,376
|
Is there a way to track tokens/context window in real-time?
|
{
"login": "robotom",
"id": 45123215,
"node_id": "MDQ6VXNlcjQ1MTIzMjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/45123215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robotom",
"html_url": "https://github.com/robotom",
"followers_url": "https://api.github.com/users/robotom/followers",
"following_url": "https://api.github.com/users/robotom/following{/other_user}",
"gists_url": "https://api.github.com/users/robotom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/robotom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robotom/subscriptions",
"organizations_url": "https://api.github.com/users/robotom/orgs",
"repos_url": "https://api.github.com/users/robotom/repos",
"events_url": "https://api.github.com/users/robotom/events{/privacy}",
"received_events_url": "https://api.github.com/users/robotom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-10-26T20:20:44
| 2024-12-02T14:44:50
| 2024-12-02T14:44:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'd like to implement a counter in a front end app to track the tokens used in order to see if I'm close to exceeding the context window.
This is useful to me because if I feed a large document into the model, I'd like to know when it's "too large" and perhaps to break it down or do something else.
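One approach that seems to fit: the final non-streamed /api/chat response reports token counts that a front end can accumulate against num_ctx. A minimal sketch, assuming a local server, jq installed, and llama3.2 as an example model:
```
# prompt_eval_count = tokens in the prompt, eval_count = tokens generated
curl -s http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [{"role": "user", "content": "hello"}],
  "stream": false
}' | jq '{prompt_eval_count, eval_count}'
```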
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7376/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4704
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4704/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4704/comments
|
https://api.github.com/repos/ollama/ollama/issues/4704/events
|
https://github.com/ollama/ollama/issues/4704
| 2,323,425,194
|
I_kwDOJ0Z1Ps6KfKeq
| 4,704
|
msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 "
|
{
"login": "wsry888",
"id": 21898282,
"node_id": "MDQ6VXNlcjIxODk4Mjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/21898282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wsry888",
"html_url": "https://github.com/wsry888",
"followers_url": "https://api.github.com/users/wsry888/followers",
"following_url": "https://api.github.com/users/wsry888/following{/other_user}",
"gists_url": "https://api.github.com/users/wsry888/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wsry888/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wsry888/subscriptions",
"organizations_url": "https://api.github.com/users/wsry888/orgs",
"repos_url": "https://api.github.com/users/wsry888/repos",
"events_url": "https://api.github.com/users/wsry888/events{/privacy}",
"received_events_url": "https://api.github.com/users/wsry888/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-29T14:05:06
| 2024-06-09T17:13:13
| 2024-06-09T17:13:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
### run hhao/openbmb-minicpm-llama3-v-2_5:fp16
msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 "
time=2024-05-29T22:03:49.672+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="22.5 GiB" memory.required.full="16.2 GiB" memory.required.partial="16.2 GiB" memory.required.kv="256.0 MiB" memory.weights.total="14.0 GiB" memory.weights.repeating="13.0 GiB" memory.weights.nonrepeating="1002.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-05-29T22:03:49.676+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="22.5 GiB" memory.required.full="16.2 GiB" memory.required.partial="16.2 GiB" memory.required.kv="256.0 MiB" memory.weights.total="14.0 GiB" memory.weights.repeating="13.0 GiB" memory.weights.nonrepeating="1002.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-05-29T22:03:49.676+08:00 level=WARN source=server.go:227 msg="multimodal models don't support parallel requests yet"
time=2024-05-29T22:03:49.678+08:00 level=INFO source=server.go:338 msg="starting llama server" cmd="C:\\Users\\users\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model D:\\Ollamamodels\\blobs\\sha256-a7a6ce348ebc060ceb8aa973f3b0bad5d3007b7ced23228c0e1aeba59c1fb72f --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --mmproj D:\\Ollamamodels\\blobs\\sha256-391d11736c3cd24a90417c47b0c88975e86918fcddb1b00494c4d715b08af13e --parallel 1 --port 2796"
time=2024-05-29T22:03:49.679+08:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-05-29T22:03:49.679+08:00 level=INFO source=server.go:526 msg="waiting for llama runner to start responding"
time=2024-05-29T22:03:49.679+08:00 level=INFO source=server.go:564 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=2986 commit="74f33adf" tid="12112" timestamp=1716991429
INFO [wmain] system info | n_threads=14 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="12112" timestamp=1716991429 total_threads=28
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="27" port="2796" tid="12112" timestamp=1716991429
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
GGML_ASSERT: C:\a\ollama\ollama\llm\llama.cpp\examples\llava\clip.cpp:1024: new_clip->has_llava_projector
time=2024-05-29T22:03:49.932+08:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 "
[GIN] 2024/05/29 - 22:03:49 | 500 | 1.45787s | 192.168.2.33 | POST "/api/chat"
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.39
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4704/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5702
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5702/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5702/comments
|
https://api.github.com/repos/ollama/ollama/issues/5702/events
|
https://github.com/ollama/ollama/pull/5702
| 2,408,859,378
|
PR_kwDOJ0Z1Ps51ZLwm
| 5,702
|
Add sidellama link
|
{
"login": "gyopak",
"id": 25726935,
"node_id": "MDQ6VXNlcjI1NzI2OTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/25726935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gyopak",
"html_url": "https://github.com/gyopak",
"followers_url": "https://api.github.com/users/gyopak/followers",
"following_url": "https://api.github.com/users/gyopak/following{/other_user}",
"gists_url": "https://api.github.com/users/gyopak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gyopak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gyopak/subscriptions",
"organizations_url": "https://api.github.com/users/gyopak/orgs",
"repos_url": "https://api.github.com/users/gyopak/repos",
"events_url": "https://api.github.com/users/gyopak/events{/privacy}",
"received_events_url": "https://api.github.com/users/gyopak/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-15T14:16:19
| 2024-07-17T17:24:44
| 2024-07-17T17:24:44
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5702",
"html_url": "https://github.com/ollama/ollama/pull/5702",
"diff_url": "https://github.com/ollama/ollama/pull/5702.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5702.patch",
"merged_at": "2024-07-17T17:24:44"
}
| null |
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5702/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3638
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3638/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3638/comments
|
https://api.github.com/repos/ollama/ollama/issues/3638/events
|
https://github.com/ollama/ollama/issues/3638
| 2,242,196,195
|
I_kwDOJ0Z1Ps6FpTLj
| 3,638
|
Error: exception error loading model architecture: unknown model architecture: ''
|
{
"login": "anubissbe",
"id": 116725818,
"node_id": "U_kgDOBvUYOg",
"avatar_url": "https://avatars.githubusercontent.com/u/116725818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anubissbe",
"html_url": "https://github.com/anubissbe",
"followers_url": "https://api.github.com/users/anubissbe/followers",
"following_url": "https://api.github.com/users/anubissbe/following{/other_user}",
"gists_url": "https://api.github.com/users/anubissbe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anubissbe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anubissbe/subscriptions",
"organizations_url": "https://api.github.com/users/anubissbe/orgs",
"repos_url": "https://api.github.com/users/anubissbe/repos",
"events_url": "https://api.github.com/users/anubissbe/events{/privacy}",
"received_events_url": "https://api.github.com/users/anubissbe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-14T14:30:01
| 2024-04-17T00:47:49
| 2024-04-17T00:47:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The model cannot be loaded after it is created from the Modelfile.
### What did you expect to see?
A working chat interface
### Steps to reproduce
drwho@jarvis:/opt/models/aya-101-GGUF$ ollama create aya -f Modelfile
transferring model data
creating model layer
creating system layer
creating parameters layer
creating config layer
using already created layer sha256:dcf5054951605dfee65396ef3c625c09539c5f605256989bf9e605e9727a00d8
writing layer sha256:d8ba2f9a17b3bbdeb5690efaa409b3fcb0b56296a777c7a69c78aa33bbddf182
writing layer sha256:d8f76500493b5c8d3ca7146cefe969f5a3a7ed1a36dffdfb06feb49089d19d7c
writing manifest
success
drwho@jarvis:/opt/models/aya-101-GGUF$ ollama run aya
Error: exception error loading model architecture: unknown model architecture: ''
drwho@jarvis:/opt/models/aya-101-GGUF$ cat Modelfile
FROM ./aya-101.Q6_K.gguf
# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1
# set the system message
SYSTEM ""
drwho@jarvis:/opt/models/aya-101-GGUF$ ls -la
total 110839576
drwxrwxr-x 3 drwho drwho 4096 Apr 14 14:17 .
drwxr-xr-x 4 drwho users 4096 Apr 14 12:37 ..
-rw-rw-r-- 1 drwho drwho 4241619232 Apr 14 13:25 aya-101.Q2_K.gguf
-rw-rw-r-- 1 drwho drwho 5553862944 Apr 14 13:24 aya-101.Q3_K.gguf
-rw-rw-r-- 1 drwho drwho 7269859872 Apr 14 13:26 aya-101.Q4_0.gguf
-rw-rw-r-- 1 drwho drwho 8077394720 Apr 14 13:04 aya-101.Q4_1.gguf
-rw-rw-r-- 1 drwho drwho 7269873952 Apr 14 13:26 aya-101.Q4_K.gguf
-rw-rw-r-- 1 drwho drwho 8884929568 Apr 14 13:07 aya-101.Q5_0.gguf
-rw-rw-r-- 1 drwho drwho 9692464416 Apr 14 13:13 aya-101.Q5_1.gguf
-rw-rw-r-- 1 drwho drwho 8884943136 Apr 14 13:07 aya-101.Q5_K.gguf
-rw-rw-r-- 1 drwho drwho 10600954144 Apr 14 13:13 aya-101.Q6_K.gguf
-rw-rw-r-- 1 drwho drwho 13730138656 Apr 14 13:25 aya-101.Q8_0.gguf
-rw-rw-r-- 1 drwho drwho 14537673504 Apr 14 13:24 aya-101.Q8_1.gguf
-rw-rw-r-- 1 drwho drwho 14739568928 Apr 14 13:25 aya-101.Q8_K.gguf
-rw-rw-r-- 1 drwho drwho 761 Apr 14 14:07 config.json
drwxrwxr-x 9 drwho drwho 4096 Apr 14 13:37 .git
-rw-rw-r-- 1 drwho drwho 2218 Apr 14 12:37 .gitattributes
-rw-rw-r-- 1 drwho drwho 162 Apr 14 14:17 Modelfile
-rw-rw-r-- 1 drwho drwho 2270 Apr 14 12:37 README.md
-rw-rw-r-- 1 drwho drwho 833 Apr 14 12:37 tokenizer_config.json
-rw-rw-r-- 1 drwho drwho 16330562 Apr 14 13:13 tokenizer.json
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
amd64
### Platform
_No response_
### Ollama version
_No response_
### GPU
Nvidia
### GPU info
V100-16GB
### CPU
Intel
### Other software
Xeon-Gold
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3638/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2687
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2687/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2687/comments
|
https://api.github.com/repos/ollama/ollama/issues/2687/events
|
https://github.com/ollama/ollama/issues/2687
| 2,149,493,793
|
I_kwDOJ0Z1Ps6AHqwh
| 2,687
|
update README to add Gemma 2B, 7B model in Model Library Table
|
{
"login": "adminazhar",
"id": 20738252,
"node_id": "MDQ6VXNlcjIwNzM4MjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/20738252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adminazhar",
"html_url": "https://github.com/adminazhar",
"followers_url": "https://api.github.com/users/adminazhar/followers",
"following_url": "https://api.github.com/users/adminazhar/following{/other_user}",
"gists_url": "https://api.github.com/users/adminazhar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adminazhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adminazhar/subscriptions",
"organizations_url": "https://api.github.com/users/adminazhar/orgs",
"repos_url": "https://api.github.com/users/adminazhar/repos",
"events_url": "https://api.github.com/users/adminazhar/events{/privacy}",
"received_events_url": "https://api.github.com/users/adminazhar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-22T16:47:55
| 2024-02-22T20:15:49
| 2024-02-22T20:15:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "adminazhar",
"id": 20738252,
"node_id": "MDQ6VXNlcjIwNzM4MjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/20738252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adminazhar",
"html_url": "https://github.com/adminazhar",
"followers_url": "https://api.github.com/users/adminazhar/followers",
"following_url": "https://api.github.com/users/adminazhar/following{/other_user}",
"gists_url": "https://api.github.com/users/adminazhar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adminazhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adminazhar/subscriptions",
"organizations_url": "https://api.github.com/users/adminazhar/orgs",
"repos_url": "https://api.github.com/users/adminazhar/repos",
"events_url": "https://api.github.com/users/adminazhar/events{/privacy}",
"received_events_url": "https://api.github.com/users/adminazhar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2687/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/591
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/591/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/591/comments
|
https://api.github.com/repos/ollama/ollama/issues/591/events
|
https://github.com/ollama/ollama/pull/591
| 1,912,108,173
|
PR_kwDOJ0Z1Ps5bKCub
| 591
|
unbound max num gpu layers
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-25T18:59:44
| 2023-09-25T22:36:47
| 2023-09-25T22:36:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/591",
"html_url": "https://github.com/ollama/ollama/pull/591",
"diff_url": "https://github.com/ollama/ollama/pull/591.diff",
"patch_url": "https://github.com/ollama/ollama/pull/591.patch",
"merged_at": "2023-09-25T22:36:46"
}
|
Load as many layers into VRAM as possible using model file size as a rough heuristic for the amount of memory required for a layer.
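A rough shell illustration of that heuristic (all numbers below are made-up examples, not values from this PR):
```
# bytes per layer ~= model file size / layer count; offload as many layers
# as fit in the currently free VRAM
FILE_BYTES=4368439584   # example model file size
N_LAYERS=32             # example layer count
FREE_VRAM=8589934592    # example free VRAM in bytes
PER_LAYER=$(( FILE_BYTES / N_LAYERS ))
echo $(( FREE_VRAM / PER_LAYER ))   # -> layers to offload
```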
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/591/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7547
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7547/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7547/comments
|
https://api.github.com/repos/ollama/ollama/issues/7547/events
|
https://github.com/ollama/ollama/issues/7547
| 2,640,324,207
|
I_kwDOJ0Z1Ps6dYCZv
| 7,547
|
Response returns 'null' for 'finish_reason'
|
{
"login": "debruyckere",
"id": 676943,
"node_id": "MDQ6VXNlcjY3Njk0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/676943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/debruyckere",
"html_url": "https://github.com/debruyckere",
"followers_url": "https://api.github.com/users/debruyckere/followers",
"following_url": "https://api.github.com/users/debruyckere/following{/other_user}",
"gists_url": "https://api.github.com/users/debruyckere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/debruyckere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/debruyckere/subscriptions",
"organizations_url": "https://api.github.com/users/debruyckere/orgs",
"repos_url": "https://api.github.com/users/debruyckere/repos",
"events_url": "https://api.github.com/users/debruyckere/events{/privacy}",
"received_events_url": "https://api.github.com/users/debruyckere/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 12
| 2024-11-07T08:54:58
| 2024-11-18T17:16:00
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm using the OpenAI .NET library to connect to Ollama, using the default llama3.2 model. I get an "Unknown ChatFinishReason value." error from the library. You can see in the code below, from ChatFinishReasonExtensions (in the OpenAI library), that the value returned by Ollama is null.

The finish reason should apparently never be null. Note that this only happens for requests that time out; in normal use, the value 'stop' is returned and parsed correctly.
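A minimal way to observe this from the shell, assuming a local Ollama with its OpenAI-compatible endpoint and llama3.2 pulled (model name is an example):
```
# "stop" is expected; null is what the .NET client chokes on for timed-out requests
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "hi"}]}' \
  | jq '.choices[0].finish_reason'
```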
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
ollama version is 0.3.14
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7547/timeline
| null |
reopened
| false
|
https://api.github.com/repos/ollama/ollama/issues/6819
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6819/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6819/comments
|
https://api.github.com/repos/ollama/ollama/issues/6819/events
|
https://github.com/ollama/ollama/issues/6819
| 2,527,405,707
|
I_kwDOJ0Z1Ps6WpSaL
| 6,819
|
Solar Pro
|
{
"login": "nonetrix",
"id": 45698918,
"node_id": "MDQ6VXNlcjQ1Njk4OTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/45698918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nonetrix",
"html_url": "https://github.com/nonetrix",
"followers_url": "https://api.github.com/users/nonetrix/followers",
"following_url": "https://api.github.com/users/nonetrix/following{/other_user}",
"gists_url": "https://api.github.com/users/nonetrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nonetrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nonetrix/subscriptions",
"organizations_url": "https://api.github.com/users/nonetrix/orgs",
"repos_url": "https://api.github.com/users/nonetrix/repos",
"events_url": "https://api.github.com/users/nonetrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/nonetrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-16T03:03:35
| 2024-09-18T21:57:54
| 2024-09-18T21:57:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/upstage/solar-pro-preview-instruct
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6819/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6819/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4323
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4323/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4323/comments
|
https://api.github.com/repos/ollama/ollama/issues/4323/events
|
https://github.com/ollama/ollama/pull/4323
| 2,290,401,833
|
PR_kwDOJ0Z1Ps5vInU1
| 4,323
|
Always use the sorted list of GPUs
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-10T20:54:13
| 2024-05-10T21:12:17
| 2024-05-10T21:12:15
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4323",
"html_url": "https://github.com/ollama/ollama/pull/4323",
"diff_url": "https://github.com/ollama/ollama/pull/4323.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4323.patch",
"merged_at": "2024-05-10T21:12:15"
}
|
Make sure the first GPU has the most free space
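As a loose illustration of the intended ordering only (the PR sorts ollama's internally tracked GPU list; nvidia-smi here is just an analogy):
```
# list GPUs with the most free memory first
nvidia-smi --query-gpu=index,memory.free --format=csv,noheader,nounits \
  | sort -t, -k2 -rn
```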
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4323/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7268
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7268/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7268/comments
|
https://api.github.com/repos/ollama/ollama/issues/7268/events
|
https://github.com/ollama/ollama/issues/7268
| 2,598,920,711
|
I_kwDOJ0Z1Ps6a6GIH
| 7,268
|
fail to run ollama run hf-mirror.com/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2-GGUF:Q8
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-10-19T08:56:04
| 2024-10-25T20:37:32
| 2024-10-23T01:44:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
taozhiyu@Mac ~ % ollama run hf-mirror.com/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2-GGUF:Q8
pulling manifest
Error: pull model manifest: 400: The specified tag is not a valid quantization scheme. Please use another tag or "latest"
taozhiyu@Mac ~ % ollama run hf-mirror.com/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2-GGUF:lastest
pulling manifest
Error: pull model manifest: 400: The specified tag is not a valid quantization scheme. Please use another tag or "latest"
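For what it's worth, a hedged example of a tag form the registry accepts, assuming the repo actually contains a Q8_0 file (":Q8" and the misspelled ":lastest" are both rejected):
```
# the tag must name an exact quantization present in the repo, or "latest"
ollama run hf-mirror.com/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2-GGUF:Q8_0
```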
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
ollama version is 0.3.13
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7268/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4489
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4489/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4489/comments
|
https://api.github.com/repos/ollama/ollama/issues/4489/events
|
https://github.com/ollama/ollama/issues/4489
| 2,301,751,450
|
I_kwDOJ0Z1Ps6JMfCa
| 4,489
|
Is there anybody who successfully imported llama-3-8b-web?
|
{
"login": "Bill-XU",
"id": 7666592,
"node_id": "MDQ6VXNlcjc2NjY1OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7666592?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bill-XU",
"html_url": "https://github.com/Bill-XU",
"followers_url": "https://api.github.com/users/Bill-XU/followers",
"following_url": "https://api.github.com/users/Bill-XU/following{/other_user}",
"gists_url": "https://api.github.com/users/Bill-XU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bill-XU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bill-XU/subscriptions",
"organizations_url": "https://api.github.com/users/Bill-XU/orgs",
"repos_url": "https://api.github.com/users/Bill-XU/repos",
"events_url": "https://api.github.com/users/Bill-XU/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bill-XU/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 2
| 2024-05-17T03:45:50
| 2024-05-17T06:45:28
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I followed the instructions described at https://github.com/ollama/ollama/blob/main/docs/import.md.
I converted this model using the options "--ctx 8192 --outtype f16 --vocab-type bpe" and quantized the result with "q4_0". Both steps finished successfully.
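For reference, a hedged sketch of those two steps with llama.cpp's tooling (script names and paths vary by llama.cpp version and are assumptions here, not the exact commands used):
```
# convert HF weights to f16 GGUF, then quantize to q4_0
python convert.py /path/to/llama-3-8b-web --ctx 8192 --outtype f16 --vocab-type bpe
./quantize ./llama-3-8b-web-f16.gguf ./llama-3-8b-web-q4_0.gguf q4_0
```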
But when using ollama to run the result, I got "Error: llama runner process no longer running: -1".
Is there anybody who successfully imported and ran it?
Best regards, Bill
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4489/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/47
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/47/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/47/comments
|
https://api.github.com/repos/ollama/ollama/issues/47/events
|
https://github.com/ollama/ollama/issues/47
| 1,792,161,831
|
I_kwDOJ0Z1Ps5q0jgn
| 47
|
When running the `ollama` should CLI start the server if it's not running
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2023-07-06T20:07:46
| 2023-08-02T14:51:25
| 2023-08-02T14:51:25
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/47/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/47/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7965
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7965/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7965/comments
|
https://api.github.com/repos/ollama/ollama/issues/7965/events
|
https://github.com/ollama/ollama/issues/7965
| 2,722,571,414
|
I_kwDOJ0Z1Ps6iRySW
| 7,965
|
It seems that the new KV cache quantization feature is incorrectly allocating resources.
|
{
"login": "emzaedu",
"id": 152583617,
"node_id": "U_kgDOCRg9wQ",
"avatar_url": "https://avatars.githubusercontent.com/u/152583617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emzaedu",
"html_url": "https://github.com/emzaedu",
"followers_url": "https://api.github.com/users/emzaedu/followers",
"following_url": "https://api.github.com/users/emzaedu/following{/other_user}",
"gists_url": "https://api.github.com/users/emzaedu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emzaedu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emzaedu/subscriptions",
"organizations_url": "https://api.github.com/users/emzaedu/orgs",
"repos_url": "https://api.github.com/users/emzaedu/repos",
"events_url": "https://api.github.com/users/emzaedu/events{/privacy}",
"received_events_url": "https://api.github.com/users/emzaedu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-12-06T09:53:06
| 2024-12-20T22:19:44
| 2024-12-20T22:19:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
For example (q4_0 kv):
/set parameter num_ctx 88000
Rombos-LLM-V2.6-Qwen-14b-Q4_K_M:latest 81d0d17e9f6a 21 GB 100% GPU 4 minutes from now
However, the actual VRAM usage amounts to 13,880,772K
There is a significant difference between the actual VRAM usage (13.24 GB) and what Ollama reports (21 GB).
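A hedged back-of-envelope for the expected KV cache size (the dimensions below are placeholders, not necessarily Qwen-14b's real ones):
```
# KV cache bytes ~= 2 (K and V) * layers * ctx * kv_dim * bytes per element
LAYERS=48; CTX=88000; KV_DIM=1024             # assumed dims
F16=$((  2 * LAYERS * CTX * KV_DIM * 2 ))      # f16: 2 bytes per element
Q4_0=$(( 2 * LAYERS * CTX * KV_DIM * 9 / 16 )) # q4_0: ~4.5 bits per element
echo "f16=${F16} q4_0=${Q4_0}"
```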
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.0
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7965/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7723
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7723/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7723/comments
|
https://api.github.com/repos/ollama/ollama/issues/7723/events
|
https://github.com/ollama/ollama/issues/7723
| 2,667,707,748
|
I_kwDOJ0Z1Ps6fAf1k
| 7,723
|
Can´t use GPU at Ubuntu 22.04 without Docker - permission problems
|
{
"login": "raullopezgn",
"id": 34060689,
"node_id": "MDQ6VXNlcjM0MDYwNjg5",
"avatar_url": "https://avatars.githubusercontent.com/u/34060689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raullopezgn",
"html_url": "https://github.com/raullopezgn",
"followers_url": "https://api.github.com/users/raullopezgn/followers",
"following_url": "https://api.github.com/users/raullopezgn/following{/other_user}",
"gists_url": "https://api.github.com/users/raullopezgn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raullopezgn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raullopezgn/subscriptions",
"organizations_url": "https://api.github.com/users/raullopezgn/orgs",
"repos_url": "https://api.github.com/users/raullopezgn/repos",
"events_url": "https://api.github.com/users/raullopezgn/events{/privacy}",
"received_events_url": "https://api.github.com/users/raullopezgn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
| null |
[] | null | 26
| 2024-11-18T08:49:40
| 2024-12-02T15:31:17
| 2024-12-02T15:31:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi, I have been using Jan.ai but I wanted to try other options.
I can't run Ollama with my GPU. I would prefer not to use Docker for security reasons.
Below I provide all the info you may need to find a solution. Thank you in advance.
CPU: AMD Ryzen 5 5600
GPU: AMD Sapphire Nitro+ RX 5700 XT
OS: ubuntu 22.04
Podman version: 3.4.4
Ollama version: 0.4.2
1) I installed the latest AMD files using this command:
amdgpu-install -y --opencl=rocr
2) I installed ollama with these commands:
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
sudo tar -C /usr -xzf ollama-linux-amd64.tgz
curl -L https://ollama.com/download/ollama-linux-amd64-rocm.tgz -o ollama-linux-amd64-rocm.tgz
sudo tar -C /usr -xzf ollama-linux-amd64-rocm.tgz
sudo useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
sudo usermod -a -G ollama $(whoami)
3) When I run Ollama as root and run mistral:7b, these lines appear in the logs (the whole log is attached below as a TXT file):
level=ERROR source=amd_linux.go:404 msg="amdgpu devices detected but permission problems block access: permissions not set up properly. Either run ollama as root, or add you user account to the render group. open /dev/kfd: permission denied"
4) After I got this message, I added my user to the "render" group, but I had the same problem.
I think the permissions on the KFD device file still need to be changed. However, as I'm not an expert in Linux commands, I don't know how to change the permissions on /dev/kfd or which user to grant them to.
ls -l kfd
crw-rw---- 1 root video 237, 0 nov 18 2024 kfd
5) Also I discovered this:
rocminfo
ROCk module version 6.8.5 is loaded
Unable to open /dev/kfd read-write: Permission denied
Failed to get user name to check for video group membership
------------ ollama.service ------------
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"
Environment="HSA_OVERRIDE_GFX_VERSION="10.3.0""
[Install]
WantedBy=default.target
------------
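For reference, a minimal sketch of the group fix asked about in step 4, assuming the devices stay owned by root:video as shown above (render on some setups); both the interactive user and the ollama service account need membership, and the change only takes effect in new sessions:

```
# hypothetical fix sketch, not taken from the report above
sudo usermod -aG render,video ollama     # the systemd service account
sudo usermod -aG render,video "$USER"    # the interactive user
sudo systemctl restart ollama            # services pick up groups on restart
# log out and back in (or use `newgrp render`) for the interactive session
```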
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.4.2
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7723/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8628
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8628/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8628/comments
|
https://api.github.com/repos/ollama/ollama/issues/8628/events
|
https://github.com/ollama/ollama/issues/8628
| 2,815,299,219
|
I_kwDOJ0Z1Ps6nzg6T
| 8,628
|
Cannot download Ollama
|
{
"login": "ichiecodes1",
"id": 168488717,
"node_id": "U_kgDOCgrvDQ",
"avatar_url": "https://avatars.githubusercontent.com/u/168488717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ichiecodes1",
"html_url": "https://github.com/ichiecodes1",
"followers_url": "https://api.github.com/users/ichiecodes1/followers",
"following_url": "https://api.github.com/users/ichiecodes1/following{/other_user}",
"gists_url": "https://api.github.com/users/ichiecodes1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ichiecodes1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ichiecodes1/subscriptions",
"organizations_url": "https://api.github.com/users/ichiecodes1/orgs",
"repos_url": "https://api.github.com/users/ichiecodes1/repos",
"events_url": "https://api.github.com/users/ichiecodes1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ichiecodes1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 12
| 2025-01-28T10:59:43
| 2025-01-29T23:59:10
| 2025-01-29T23:59:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please, I really want to download this platform but I can't. Can it be fixed?
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8628/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6228
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6228/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6228/comments
|
https://api.github.com/repos/ollama/ollama/issues/6228/events
|
https://github.com/ollama/ollama/issues/6228
| 2,452,899,411
|
I_kwDOJ0Z1Ps6SNEZT
| 6,228
|
llama_init_from_gpt_params: error: failed to load model 'models\gemma-1.1-7b-it.Q4_K_M.gguf'
|
{
"login": "stephen521",
"id": 33420615,
"node_id": "MDQ6VXNlcjMzNDIwNjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/33420615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stephen521",
"html_url": "https://github.com/stephen521",
"followers_url": "https://api.github.com/users/stephen521/followers",
"following_url": "https://api.github.com/users/stephen521/following{/other_user}",
"gists_url": "https://api.github.com/users/stephen521/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stephen521/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stephen521/subscriptions",
"organizations_url": "https://api.github.com/users/stephen521/orgs",
"repos_url": "https://api.github.com/users/stephen521/repos",
"events_url": "https://api.github.com/users/stephen521/events{/privacy}",
"received_events_url": "https://api.github.com/users/stephen521/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-07T08:33:33
| 2024-09-02T23:21:12
| 2024-09-02T23:21:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I run the command below on a Windows machine (Intel Xeon Silver 421R 2.4GHz, 512m, NVIDIA GeForce RTX 3090), I get the error below:
```
llama-cli.exe -m models\gemma-1.1-7b-it.Q4_K_M.gguf --prompt "Once upon a time"
Log start
main: build = 0 (unknown)
main: built with cc (GCC) 14.1.0 for i686-w64-mingw32
main: seed = 1723019120
llama_model_load: error loading model: tensor 'blk.2.ffn_down.weight' data is not within the file bounds, model is corrupted or incomplete
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models\gemma-1.1-7b-it.Q4_K_M.gguf'
main: error: unable to load model
```
What is the problem? Need help!
Thanks a lot.
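For context, a truncated or partially downloaded GGUF commonly produces exactly this "data is not within the file bounds" error; a minimal verification sketch (shown as a POSIX shell session with a placeholder URL; adapt for Windows):

```
# a truncated file is smaller than the size published by the source
ls -l models/gemma-1.1-7b-it.Q4_K_M.gguf
sha256sum models/gemma-1.1-7b-it.Q4_K_M.gguf
# re-download with resume support ($MODEL_URL is a placeholder)
curl -L -C - -o models/gemma-1.1-7b-it.Q4_K_M.gguf "$MODEL_URL"
```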
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
3.1
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6228/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7546
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7546/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7546/comments
|
https://api.github.com/repos/ollama/ollama/issues/7546/events
|
https://github.com/ollama/ollama/issues/7546
| 2,640,226,786
|
I_kwDOJ0Z1Ps6dXqni
| 7,546
|
libggml linked to wrong cuda version
|
{
"login": "jsurloppe",
"id": 20650010,
"node_id": "MDQ6VXNlcjIwNjUwMDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/20650010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jsurloppe",
"html_url": "https://github.com/jsurloppe",
"followers_url": "https://api.github.com/users/jsurloppe/followers",
"following_url": "https://api.github.com/users/jsurloppe/following{/other_user}",
"gists_url": "https://api.github.com/users/jsurloppe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jsurloppe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jsurloppe/subscriptions",
"organizations_url": "https://api.github.com/users/jsurloppe/orgs",
"repos_url": "https://api.github.com/users/jsurloppe/repos",
"events_url": "https://api.github.com/users/jsurloppe/events{/privacy}",
"received_events_url": "https://api.github.com/users/jsurloppe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-11-07T08:14:46
| 2024-11-08T09:29:38
| 2024-11-07T17:20:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
I just upgraded to ollama 0.4.0 and loading a model fails with the following error:
```
/tmp/ollama2415219728/runners/cuda_v12/ollama_llama_server: error while loading shared libraries: libcublas.so.11: cannot open shared object file: No such file or directory
time=2024-11-07T08:55:32.986+01:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: exit status 127"
```
It seems that the `libggml_cuda_v12.so` from the binary distribution is linked against CUDA 11:
```
$ ldd libggml_cuda_v12.so
linux-vdso.so.1 (0x00007ffd315f8000)
libcuda.so.1 => /usr/lib64/libcuda.so.1 (0x00007f7bae400000)
libcublas.so.11 => not found
libcudart.so.11.0 => not found
libcublasLt.so.11 => not found
librt.so.1 => /lib64/librt.so.1 (0x00007f7bf9b83000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f7bf9b7c000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f7bf9b77000)
libstdc++.so.6 => /usr/lib/gcc/x86_64-pc-linux-gnu/13/libstdc++.so.6 (0x00007f7bae000000)
libm.so.6 => /lib64/libm.so.6 (0x00007f7bae322000)
libgcc_s.so.1 => /usr/lib/gcc/x86_64-pc-linux-gnu/13/libgcc_s.so.1 (0x00007f7bf9b52000)
libc.so.6 => /lib64/libc.so.6 (0x00007f7bade1d000)
/lib64/ld-linux-x86-64.so.2 (0x00007f7bf9ba9000)
```
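A minimal sketch of how one might double-check which libraries the extracted runner resolves at load time (paths are illustrative; as the error above shows, the runners are unpacked under a temporary directory at startup, and the bundled CUDA libraries are assumed to land under /usr/lib/ollama when the tarballs are extracted to /usr):

```
# inspect the runner the server actually executes
ldd /tmp/ollama*/runners/cuda_v12/ollama_llama_server | grep -E 'cublas|cudart'
# anything reported "not found" must come from the bundled libraries
# or a system CUDA install via the loader search path
export LD_LIBRARY_PATH=/usr/lib/ollama:$LD_LIBRARY_PATH
```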
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.0
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7546/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7546/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4382
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4382/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4382/comments
|
https://api.github.com/repos/ollama/ollama/issues/4382/events
|
https://github.com/ollama/ollama/pull/4382
| 2,291,494,082
|
PR_kwDOJ0Z1Ps5vMDLt
| 4,382
|
Allow XDG user directories
|
{
"login": "noahgitsham",
"id": 73707948,
"node_id": "MDQ6VXNlcjczNzA3OTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/73707948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noahgitsham",
"html_url": "https://github.com/noahgitsham",
"followers_url": "https://api.github.com/users/noahgitsham/followers",
"following_url": "https://api.github.com/users/noahgitsham/following{/other_user}",
"gists_url": "https://api.github.com/users/noahgitsham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/noahgitsham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/noahgitsham/subscriptions",
"organizations_url": "https://api.github.com/users/noahgitsham/orgs",
"repos_url": "https://api.github.com/users/noahgitsham/repos",
"events_url": "https://api.github.com/users/noahgitsham/events{/privacy}",
"received_events_url": "https://api.github.com/users/noahgitsham/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2024-05-12T20:44:25
| 2024-05-30T15:48:04
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4382",
"html_url": "https://github.com/ollama/ollama/pull/4382",
"diff_url": "https://github.com/ollama/ollama/pull/4382.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4382.patch",
"merged_at": null
}
|
Addresses #228. This is my first time writing Go, so please feel free to correct any bad code.*
This change defaults to using the [XDG Base Directory Specification](https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html) for the history file and private key files which are currently generated in the `.ollama` directory.
It will still use `.ollama` if it exists (hence shouldn't break current setups), but allows use of `$XDG_DATA_HOME/ollama` instead.
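For illustration, a minimal sketch of the lookup order described above (shell pseudocode; the actual change is in Go):

```
# prefer an existing ~/.ollama; otherwise fall back to the XDG data dir,
# defaulting XDG_DATA_HOME to ~/.local/share per the spec
if [ -d "$HOME/.ollama" ]; then
  dir="$HOME/.ollama"
else
  dir="${XDG_DATA_HOME:-$HOME/.local/share}/ollama"
fi
```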
I am unsure if/where these should be moved to; do they also belong in XDG_DATA_HOME?
- https://github.com/ollama/ollama/blob/4ec7445a6f678b6efc773bb9fa886d7c9b075577/app/store/store_linux.go#L15
- https://github.com/ollama/ollama/blob/4ec7445a6f678b6efc773bb9fa886d7c9b075577/macapp/src/index.ts#L27
I'm also unsure if I need to add/change anything here; I'm assuming my changes don't even affect Windows, so no?
- https://github.com/ollama/ollama/blob/4ec7445a6f678b6efc773bb9fa886d7c9b075577/app/ollama.iss#L122
I simply searched the repo for ".ollama" to find things to change; please let me know if I missed anything.
I have only tested and designed this to work on Linux; let me know if it doesn't work on macOS.
*A styleguide and/or contributing document would be great #2231
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4382/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4382/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4516
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4516/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4516/comments
|
https://api.github.com/repos/ollama/ollama/issues/4516/events
|
https://github.com/ollama/ollama/issues/4516
| 2,304,203,304
|
I_kwDOJ0Z1Ps6JV1oo
| 4,516
|
Ollama: running Vite in production mode fails
|
{
"login": "ejgutierrez74",
"id": 11474846,
"node_id": "MDQ6VXNlcjExNDc0ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11474846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ejgutierrez74",
"html_url": "https://github.com/ejgutierrez74",
"followers_url": "https://api.github.com/users/ejgutierrez74/followers",
"following_url": "https://api.github.com/users/ejgutierrez74/following{/other_user}",
"gists_url": "https://api.github.com/users/ejgutierrez74/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ejgutierrez74/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ejgutierrez74/subscriptions",
"organizations_url": "https://api.github.com/users/ejgutierrez74/orgs",
"repos_url": "https://api.github.com/users/ejgutierrez74/repos",
"events_url": "https://api.github.com/users/ejgutierrez74/events{/privacy}",
"received_events_url": "https://api.github.com/users/ejgutierrez74/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706485225,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eh6Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/js",
"name": "js",
"color": "5F50E3",
"default": false,
"description": "relating to the ollama-js client library"
}
] |
open
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2024-05-18T18:01:16
| 2025-01-09T09:59:48
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm developing a web app for educational purposes using React + Vite. Until now, in development mode, I haven't faced major problems. But when I tried to build for production, I got the error below:
```
vite v5.2.10 building for production...
[plugin:vite:resolve] [plugin vite:resolve] Module "fs" has been externalized for browser compatibility, imported by "/home/eduardo/Descargas/workshop-react/node_modules/ollama/dist/index.mjs". See https://vitejs.dev/guide/troubleshooting.html#module-externalized-for-browser-compatibility for more details.
[plugin:vite:resolve] [plugin vite:resolve] Module "path" has been externalized for browser compatibility, imported by "/home/eduardo/Descargas/workshop-react/node_modules/ollama/dist/index.mjs". See https://vitejs.dev/guide/troubleshooting.html#module-externalized-for-browser-compatibility for more details.
[plugin:vite:resolve] [plugin vite:resolve] Module "crypto" has been externalized for browser compatibility, imported by "/home/eduardo/Descargas/workshop-react/node_modules/ollama/dist/index.mjs". See https://vitejs.dev/guide/troubleshooting.html#module-externalized-for-browser-compatibility for more details.
[plugin:vite:resolve] [plugin vite:resolve] Module "os" has been externalized for browser compatibility, imported by "/home/eduardo/Descargas/workshop-react/node_modules/ollama/dist/index.mjs". See https://vitejs.dev/guide/troubleshooting.html#module-externalized-for-browser-compatibility for more details.
✓ 1052 modules transformed.
x Build failed in 1.49s
error during build:
RollupError: node_modules/ollama/dist/index.mjs (2:13): "promises" is not exported by "__vite-browser-external", imported by "node_modules/ollama/dist/index.mjs".
file: /home/eduardo/Descargas/workshop-react/node_modules/ollama/dist/index.mjs:2:13
1: import { O as Ollama$1, h as head, p as post } from './shared/ollama.14e58652.mjs';
2: import fs, { promises, createReadStream } from 'fs';
^
3: import { resolve, join, dirname } from 'path';
4: import { createHash } from 'crypto';
at getRollupError (file:///home/eduardo/Descargas/workshop-react/node_modules/vite/node_modules/rollup/dist/es/shared/parseAst.js:394:41)
at error (file:///home/eduardo/Descargas/workshop-react/node_modules/vite/node_modules/rollup/dist/es/shared/parseAst.js:390:42)
at Module.error (file:///home/eduardo/Descargas/workshop-react/node_modules/vite/node_modules/rollup/dist/es/shared/node-entry.js:13860:16)
at Module.traceVariable (file:///home/eduardo/Descargas/workshop-react/node_modules/vite/node_modules/rollup/dist/es/shared/node-entry.js:14308:29)
at ModuleScope.findVariable (file:///home/eduardo/Descargas/workshop-react/node_modules/vite/node_modules/rollup/dist/es/shared/node-entry.js:11989:39)
at ChildScope.findVariable (file:///home/eduardo/Descargas/workshop-react/node_modules/vite/node_modules/rollup/dist/es/shared/node-entry.js:7432:38)
at ClassBodyScope.findVariable (file:///home/eduardo/Descargas/workshop-react/node_modules/vite/node_modules/rollup/dist/es/shared/node-entry.js:7432:38)
at ChildScope.findVariable (file:///home/eduardo/Descargas/workshop-react/node_modules/vite/node_modules/rollup/dist/es/shared/node-entry.js:7432:38)
at ChildScope.findVariable (file:///home/eduardo/Descargas/workshop-react/node_modules/vite/node_modules/rollup/dist/es/shared/node-entry.js:7432:38)
at FunctionScope.findVariable (file:///home/eduardo/Descargas/workshop-react/node_modules/vite/node_modules/rollup/dist/es/shared/node-entry.js:7432:38)
```
According to Vite, this is a problem that should be fixed by ollama (the library): https://vitejs.dev/guide/troubleshooting.html#module-externalized-for-browser-compatibility
To be clear: `npm run dev` works, but when I try `npm run build` I get the error above.
I hope this can be fixed soon.
Thanks
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.1.37
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4516/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6340
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6340/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6340/comments
|
https://api.github.com/repos/ollama/ollama/issues/6340/events
|
https://github.com/ollama/ollama/pull/6340
| 2,463,805,804
|
PR_kwDOJ0Z1Ps54QgT9
| 6,340
|
Add new chat app LLMChat.co
|
{
"login": "deep93333",
"id": 100652109,
"node_id": "U_kgDOBf_UTQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100652109?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deep93333",
"html_url": "https://github.com/deep93333",
"followers_url": "https://api.github.com/users/deep93333/followers",
"following_url": "https://api.github.com/users/deep93333/following{/other_user}",
"gists_url": "https://api.github.com/users/deep93333/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deep93333/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deep93333/subscriptions",
"organizations_url": "https://api.github.com/users/deep93333/orgs",
"repos_url": "https://api.github.com/users/deep93333/repos",
"events_url": "https://api.github.com/users/deep93333/events{/privacy}",
"received_events_url": "https://api.github.com/users/deep93333/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-08-13T16:55:56
| 2024-09-23T13:40:19
| 2024-09-23T13:40:19
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6340",
"html_url": "https://github.com/ollama/ollama/pull/6340",
"diff_url": "https://github.com/ollama/ollama/pull/6340.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6340.patch",
"merged_at": null
}
| null |
{
"login": "deep93333",
"id": 100652109,
"node_id": "U_kgDOBf_UTQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100652109?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deep93333",
"html_url": "https://github.com/deep93333",
"followers_url": "https://api.github.com/users/deep93333/followers",
"following_url": "https://api.github.com/users/deep93333/following{/other_user}",
"gists_url": "https://api.github.com/users/deep93333/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deep93333/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deep93333/subscriptions",
"organizations_url": "https://api.github.com/users/deep93333/orgs",
"repos_url": "https://api.github.com/users/deep93333/repos",
"events_url": "https://api.github.com/users/deep93333/events{/privacy}",
"received_events_url": "https://api.github.com/users/deep93333/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6340/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1668
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1668/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1668/comments
|
https://api.github.com/repos/ollama/ollama/issues/1668/events
|
https://github.com/ollama/ollama/issues/1668
| 2,053,380,142
|
I_kwDOJ0Z1Ps56ZBgu
| 1,668
|
unexpected EOF Mac OS
|
{
"login": "bhaskoro-muthohar",
"id": 35159954,
"node_id": "MDQ6VXNlcjM1MTU5OTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/35159954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhaskoro-muthohar",
"html_url": "https://github.com/bhaskoro-muthohar",
"followers_url": "https://api.github.com/users/bhaskoro-muthohar/followers",
"following_url": "https://api.github.com/users/bhaskoro-muthohar/following{/other_user}",
"gists_url": "https://api.github.com/users/bhaskoro-muthohar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhaskoro-muthohar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhaskoro-muthohar/subscriptions",
"organizations_url": "https://api.github.com/users/bhaskoro-muthohar/orgs",
"repos_url": "https://api.github.com/users/bhaskoro-muthohar/repos",
"events_url": "https://api.github.com/users/bhaskoro-muthohar/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhaskoro-muthohar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2023-12-22T04:51:49
| 2024-05-18T14:15:29
| 2024-01-08T02:59:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I got this error:
```
> ollama run dolphin-mixtral:latest
pulling manifest
pulling bdb11b0699e0... 60% ▕██████████████████ ▏ 15 GB/ 26 GB 3.4 MB/s 52m23s
Error: max retries exceeded: unexpected EOF
```
Here is my `.ollama/logs/server.log`:
[server.log](https://github.com/jmorganca/ollama/files/13748433/server.log)
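For anyone hitting the same thing, `ollama pull` resumes partially downloaded blobs, so re-running is the usual recovery; a minimal sketch (the exact on-disk names are illustrative):

```
# re-running resumes from the bytes already fetched
ollama pull dolphin-mixtral:latest
# partially downloaded layers can be inspected under the models directory
ls -lh ~/.ollama/models/blobs/ | grep -i partial
```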
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1668/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1668/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2414
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2414/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2414/comments
|
https://api.github.com/repos/ollama/ollama/issues/2414/events
|
https://github.com/ollama/ollama/issues/2414
| 2,125,922,000
|
I_kwDOJ0Z1Ps5-tv7Q
| 2,414
|
MB
|
{
"login": "arghunter",
"id": 91099806,
"node_id": "MDQ6VXNlcjkxMDk5ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/91099806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arghunter",
"html_url": "https://github.com/arghunter",
"followers_url": "https://api.github.com/users/arghunter/followers",
"following_url": "https://api.github.com/users/arghunter/following{/other_user}",
"gists_url": "https://api.github.com/users/arghunter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arghunter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arghunter/subscriptions",
"organizations_url": "https://api.github.com/users/arghunter/orgs",
"repos_url": "https://api.github.com/users/arghunter/repos",
"events_url": "https://api.github.com/users/arghunter/events{/privacy}",
"received_events_url": "https://api.github.com/users/arghunter/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-08T19:27:30
| 2024-02-08T19:27:46
| 2024-02-08T19:27:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "arghunter",
"id": 91099806,
"node_id": "MDQ6VXNlcjkxMDk5ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/91099806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arghunter",
"html_url": "https://github.com/arghunter",
"followers_url": "https://api.github.com/users/arghunter/followers",
"following_url": "https://api.github.com/users/arghunter/following{/other_user}",
"gists_url": "https://api.github.com/users/arghunter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arghunter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arghunter/subscriptions",
"organizations_url": "https://api.github.com/users/arghunter/orgs",
"repos_url": "https://api.github.com/users/arghunter/repos",
"events_url": "https://api.github.com/users/arghunter/events{/privacy}",
"received_events_url": "https://api.github.com/users/arghunter/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2414/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8675
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8675/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8675/comments
|
https://api.github.com/repos/ollama/ollama/issues/8675/events
|
https://github.com/ollama/ollama/issues/8675
| 2,819,497,807
|
I_kwDOJ0Z1Ps6oDh9P
| 8,675
|
Download always goes back to 1%
|
{
"login": "fredroo",
"id": 6863089,
"node_id": "MDQ6VXNlcjY4NjMwODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6863089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fredroo",
"html_url": "https://github.com/fredroo",
"followers_url": "https://api.github.com/users/fredroo/followers",
"following_url": "https://api.github.com/users/fredroo/following{/other_user}",
"gists_url": "https://api.github.com/users/fredroo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fredroo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fredroo/subscriptions",
"organizations_url": "https://api.github.com/users/fredroo/orgs",
"repos_url": "https://api.github.com/users/fredroo/repos",
"events_url": "https://api.github.com/users/fredroo/events{/privacy}",
"received_events_url": "https://api.github.com/users/fredroo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2025-01-29T22:46:12
| 2025-01-30T11:36:13
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I tried with CMD, PowerShell, and Git Bash, but got the same error, with different models like llama3.3 and deepseek-r1:70b.
I have free space on the OS SSD (C:\) and on the destination disk (T:\Ollama\).
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
v0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8675/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8675/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6988
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6988/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6988/comments
|
https://api.github.com/repos/ollama/ollama/issues/6988/events
|
https://github.com/ollama/ollama/pull/6988
| 2,551,498,739
|
PR_kwDOJ0Z1Ps582IzU
| 6,988
|
llama: don't create extraneous directories
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-26T20:43:04
| 2024-09-26T21:05:34
| 2024-09-26T21:05:31
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6988",
"html_url": "https://github.com/ollama/ollama/pull/6988",
"diff_url": "https://github.com/ollama/ollama/pull/6988.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6988.patch",
"merged_at": "2024-09-26T21:05:31"
}
|
With the .WAIT this shouldn't be necessary anymore, and it was causing payload processing glitches.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6988/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/607
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/607/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/607/comments
|
https://api.github.com/repos/ollama/ollama/issues/607/events
|
https://github.com/ollama/ollama/issues/607
| 1,913,919,962
|
I_kwDOJ0Z1Ps5yFBna
| 607
|
`ollama -v` prints `0.0.0` in the latest docker images
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2023-09-26T16:49:37
| 2023-09-29T18:30:27
| 2023-09-29T18:30:27
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/607/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7103
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7103/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7103/comments
|
https://api.github.com/repos/ollama/ollama/issues/7103/events
|
https://github.com/ollama/ollama/pull/7103
| 2,566,665,258
|
PR_kwDOJ0Z1Ps59oy47
| 7,103
|
llama: cgo ggml
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-04T15:41:35
| 2024-10-08T16:23:30
| 2024-10-08T15:53:59
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7103",
"html_url": "https://github.com/ollama/ollama/pull/7103",
"diff_url": "https://github.com/ollama/ollama/pull/7103.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7103.patch",
"merged_at": null
}
|
Replaced by #7140
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7103/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6409
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6409/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6409/comments
|
https://api.github.com/repos/ollama/ollama/issues/6409/events
|
https://github.com/ollama/ollama/issues/6409
| 2,472,392,863
|
I_kwDOJ0Z1Ps6TXbif
| 6,409
|
End and Home buttons don't work in ollama in tmux
|
{
"login": "yurivict",
"id": 271906,
"node_id": "MDQ6VXNlcjI3MTkwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yurivict",
"html_url": "https://github.com/yurivict",
"followers_url": "https://api.github.com/users/yurivict/followers",
"following_url": "https://api.github.com/users/yurivict/following{/other_user}",
"gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yurivict/subscriptions",
"organizations_url": "https://api.github.com/users/yurivict/orgs",
"repos_url": "https://api.github.com/users/yurivict/repos",
"events_url": "https://api.github.com/users/yurivict/events{/privacy}",
"received_events_url": "https://api.github.com/users/yurivict/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-08-19T03:47:24
| 2024-12-02T21:51:45
| 2024-12-02T21:51:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Pressing End doesn't move the cursor to the end of the line; instead it enters a '~' character.
The same happens with the Home key.
These are the TERM-related environment variables in tmux:
```
$ env | grep TERM
COLORTERM=truecolor
TERM_PROGRAM_VERSION=3.3a
TERM=tmux-256color
TERM_PROGRAM=tmux
```
Version: 0.3.6
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.6
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6409/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1175
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1175/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1175/comments
|
https://api.github.com/repos/ollama/ollama/issues/1175/events
|
https://github.com/ollama/ollama/pull/1175
| 1,999,475,799
|
PR_kwDOJ0Z1Ps5fxPgf
| 1,175
|
Refactor Request Retry
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-17T16:16:06
| 2023-11-17T19:22:36
| 2023-11-17T19:22:35
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1175",
"html_url": "https://github.com/ollama/ollama/pull/1175",
"diff_url": "https://github.com/ollama/ollama/pull/1175.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1175.patch",
"merged_at": "2023-11-17T19:22:35"
}
|
The request retry logic lives mostly in `download.go` and `upload.go`. This function is only meant to retry on authentication failure, so retrying multiple times is unnecessary (a sketch of the intended shape follows the list).
- do not log `upload failure` on error; this function is also called for downloads
- do not log on request cancellation; this caused a cancelled download to log 10+ times due to the chunked downloads
- only retry on auth failure, explicitly and exactly once
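A minimal Go sketch of that shape (illustrative only; `doWithAuthRetry` is a hypothetical name, not the function touched by this PR):
```go
package registry

import (
	"context"
	"errors"
	"net/http"
)

// doWithAuthRetry runs a request and retries it exactly once, and only on
// an authentication failure. Cancellation is passed through without logging
// so a cancelled chunked download doesn't produce a flood of messages.
func doWithAuthRetry(ctx context.Context, do func(context.Context) (*http.Response, error)) (*http.Response, error) {
	resp, err := do(ctx)
	switch {
	case errors.Is(err, context.Canceled):
		return nil, err // caller cancelled; stay quiet
	case err != nil:
		return nil, err // real transport error; the caller decides what to log
	case resp.StatusCode == http.StatusUnauthorized:
		resp.Body.Close()
		// re-authenticate here (elided), then retry a single time
		return do(ctx)
	}
	return resp, nil
}
```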
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1175/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1628
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1628/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1628/comments
|
https://api.github.com/repos/ollama/ollama/issues/1628/events
|
https://github.com/ollama/ollama/issues/1628
| 2,050,327,694
|
I_kwDOJ0Z1Ps56NYSO
| 1,628
|
[Feature Request] integrate PowerInfer as alternative to llama.cpp
|
{
"login": "jenningsloy318",
"id": 10169236,
"node_id": "MDQ6VXNlcjEwMTY5MjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/10169236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jenningsloy318",
"html_url": "https://github.com/jenningsloy318",
"followers_url": "https://api.github.com/users/jenningsloy318/followers",
"following_url": "https://api.github.com/users/jenningsloy318/following{/other_user}",
"gists_url": "https://api.github.com/users/jenningsloy318/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jenningsloy318/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jenningsloy318/subscriptions",
"organizations_url": "https://api.github.com/users/jenningsloy318/orgs",
"repos_url": "https://api.github.com/users/jenningsloy318/repos",
"events_url": "https://api.github.com/users/jenningsloy318/events{/privacy}",
"received_events_url": "https://api.github.com/users/jenningsloy318/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2023-12-20T11:03:25
| 2024-07-24T05:59:42
| 2024-03-11T18:13:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
I just found another inference engine, https://github.com/SJTU-IPADS/PowerInfer. It seems to have some advantages, though I haven't tested it. Could ollama integrate it?
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1628/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1628/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7722
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7722/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7722/comments
|
https://api.github.com/repos/ollama/ollama/issues/7722/events
|
https://github.com/ollama/ollama/pull/7722
| 2,667,574,687
|
PR_kwDOJ0Z1Ps6CNPY1
| 7,722
|
openai: fix follow-on messages having "role": "assistant"
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-11-18T08:04:33
| 2025-01-06T18:41:46
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7722",
"html_url": "https://github.com/ollama/ollama/pull/7722",
"diff_url": "https://github.com/ollama/ollama/pull/7722.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7722.patch",
"merged_at": null
}
|
Fixes https://github.com/ollama/ollama/issues/7626
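For context: in the OpenAI streaming format only the first chunk's delta carries `"role": "assistant"`; follow-on deltas omit it. A hedged Go sketch of one way to model that (the `Delta` type here is illustrative, not ollama's actual struct):
```go
package openai

import "encoding/json"

// Delta marshals Role only when it is set, so the first chunk serializes as
// {"role":"assistant","content":""} and follow-on chunks as {"content":"..."}.
type Delta struct {
	Role    string `json:"role,omitempty"`
	Content string `json:"content"`
}

func exampleChunks() ([]byte, []byte) {
	first, _ := json.Marshal(Delta{Role: "assistant"}) // first chunk: role present
	rest, _ := json.Marshal(Delta{Content: "Hello"})   // follow-on: role omitted
	return first, rest
}
```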
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7722/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6236
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6236/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6236/comments
|
https://api.github.com/repos/ollama/ollama/issues/6236/events
|
https://github.com/ollama/ollama/issues/6236
| 2,453,873,359
|
I_kwDOJ0Z1Ps6SQyLP
| 6,236
|
gpu not found in windows
|
{
"login": "showyoung",
"id": 5949457,
"node_id": "MDQ6VXNlcjU5NDk0NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5949457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/showyoung",
"html_url": "https://github.com/showyoung",
"followers_url": "https://api.github.com/users/showyoung/followers",
"following_url": "https://api.github.com/users/showyoung/following{/other_user}",
"gists_url": "https://api.github.com/users/showyoung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/showyoung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/showyoung/subscriptions",
"organizations_url": "https://api.github.com/users/showyoung/orgs",
"repos_url": "https://api.github.com/users/showyoung/repos",
"events_url": "https://api.github.com/users/showyoung/events{/privacy}",
"received_events_url": "https://api.github.com/users/showyoung/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 18
| 2024-08-07T16:23:10
| 2024-09-05T18:46:35
| 2024-09-05T18:46:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
A few days ago ollama could still run on the GPU, but today it suddenly uses only the CPU. I tried reinstalling ollama, using an older version of ollama, and updating the graphics driver, but I couldn't get it to run on the GPU again. Windows 11 22H2; the graphics card is a 3080 and the CPU is Intel.
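A generic first check (not from the original report) is to confirm the driver still sees the card and to ask ollama where the model actually loaded:
```
nvidia-smi
ollama ps
```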
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.4
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6236/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/869
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/869/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/869/comments
|
https://api.github.com/repos/ollama/ollama/issues/869/events
|
https://github.com/ollama/ollama/issues/869
| 1,955,397,440
|
I_kwDOJ0Z1Ps50jP9A
| 869
|
API documentation link in the Homepage is broken
|
{
"login": "kumarana",
"id": 6807325,
"node_id": "MDQ6VXNlcjY4MDczMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6807325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kumarana",
"html_url": "https://github.com/kumarana",
"followers_url": "https://api.github.com/users/kumarana/followers",
"following_url": "https://api.github.com/users/kumarana/following{/other_user}",
"gists_url": "https://api.github.com/users/kumarana/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kumarana/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kumarana/subscriptions",
"organizations_url": "https://api.github.com/users/kumarana/orgs",
"repos_url": "https://api.github.com/users/kumarana/repos",
"events_url": "https://api.github.com/users/kumarana/events{/privacy}",
"received_events_url": "https://api.github.com/users/kumarana/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-10-21T08:57:29
| 2023-10-22T13:03:29
| 2023-10-21T15:58:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It should be https://github.com/jmorganca/ollama/blob/main/docs/api.md
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/869/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1860
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1860/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1860/comments
|
https://api.github.com/repos/ollama/ollama/issues/1860/events
|
https://github.com/ollama/ollama/issues/1860
| 2,071,300,606
|
I_kwDOJ0Z1Ps57dYn-
| 1,860
|
[FEATURE] Add "mv" command + add possibly add confirmation for "rm"
|
{
"login": "jukofyork",
"id": 69222624,
"node_id": "MDQ6VXNlcjY5MjIyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jukofyork",
"html_url": "https://github.com/jukofyork",
"followers_url": "https://api.github.com/users/jukofyork/followers",
"following_url": "https://api.github.com/users/jukofyork/following{/other_user}",
"gists_url": "https://api.github.com/users/jukofyork/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jukofyork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jukofyork/subscriptions",
"organizations_url": "https://api.github.com/users/jukofyork/orgs",
"repos_url": "https://api.github.com/users/jukofyork/repos",
"events_url": "https://api.github.com/users/jukofyork/events{/privacy}",
"received_events_url": "https://api.github.com/users/jukofyork/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-01-08T22:07:10
| 2024-03-22T01:16:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be nice to have a "mv" command; it could probably just be implemented as a "cp" followed by an "rm" (a CLI workaround is sketched below).
It might also be a good idea to add a confirmation prompt to "rm", as I've accidentally removed a model a couple of times now.
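In the meantime, a "mv" can be composed from the existing verbs; the source is removed only after the copy succeeds:
```
ollama cp old-name new-name && ollama rm old-name
```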
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1860/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5679
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5679/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5679/comments
|
https://api.github.com/repos/ollama/ollama/issues/5679/events
|
https://github.com/ollama/ollama/pull/5679
| 2,407,112,636
|
PR_kwDOJ0Z1Ps51TWry
| 5,679
|
Add LLPhant to README.md
|
{
"login": "f-lombardo",
"id": 280709,
"node_id": "MDQ6VXNlcjI4MDcwOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/280709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/f-lombardo",
"html_url": "https://github.com/f-lombardo",
"followers_url": "https://api.github.com/users/f-lombardo/followers",
"following_url": "https://api.github.com/users/f-lombardo/following{/other_user}",
"gists_url": "https://api.github.com/users/f-lombardo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/f-lombardo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/f-lombardo/subscriptions",
"organizations_url": "https://api.github.com/users/f-lombardo/orgs",
"repos_url": "https://api.github.com/users/f-lombardo/repos",
"events_url": "https://api.github.com/users/f-lombardo/events{/privacy}",
"received_events_url": "https://api.github.com/users/f-lombardo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-13T19:30:03
| 2024-11-21T08:54:27
| 2024-11-21T08:54:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5679",
"html_url": "https://github.com/ollama/ollama/pull/5679",
"diff_url": "https://github.com/ollama/ollama/pull/5679.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5679.patch",
"merged_at": "2024-11-21T08:54:26"
}
|
LLPhant is a PHP library that wraps many LLM services, and it supports Ollama.
https://github.com/theodo-group/LLPhant?tab=readme-ov-file#ollama
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5679/timeline
| null | null | true
|