| url (string, 51-54 chars) | repository_url (string, 1 class) | labels_url (string, 65-68 chars) | comments_url (string, 60-63 chars) | events_url (string, 58-61 chars) | html_url (string, 39-44 chars) | id (int64, 1.78B-2.82B) | node_id (string, 18-19 chars) | number (int64, 1-8.69k) | title (string, 1-382 chars) | user (dict) | labels (list, 0-5 items) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list, 0-2 items) | milestone (null) | comments (int64, 0-323) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string, 4 classes) | sub_issues_summary (dict) | active_lock_reason (null) | draft (bool, 2 classes) | pull_request (dict) | body (string, 2-118k chars, nullable) | closed_by (dict) | reactions (dict) | timeline_url (string, 60-63 chars) | performed_via_github_app (null) | state_reason (string, 4 classes) | is_pull_request (bool, 2 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/2187
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2187/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2187/comments
|
https://api.github.com/repos/ollama/ollama/issues/2187/events
|
https://github.com/ollama/ollama/issues/2187
| 2,100,046,699
|
I_kwDOJ0Z1Ps59LCtr
| 2,187
|
Support GPU runners on CPUs without AVX
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7700262114,
"node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g",
"url": "https://api.github.com/repos/ollama/ollama/labels/build",
"name": "build",
"color": "006b75",
"default": false,
"description": "Issues relating to building ollama from source"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 57
| 2024-01-25T10:19:53
| 2025-01-08T14:55:07
| 2024-12-10T17:47:21
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
2024/01/25 10:13:00 gpu.go:137: INFO CUDA Compute Capability detected: 8.6
^Cuser@llm-01:~$ ollama serve
2024/01/25 10:14:17 images.go:815: INFO total blobs: 14
2024/01/25 10:14:17 images.go:822: INFO total unused blobs removed: 0
2024/01/25 10:14:17 routes.go:943: INFO Listening on 127.0.0.1:11434 (version 0.1.21)
2024/01/25 10:14:17 payload_common.go:106: INFO Extracting dynamic libraries...
2024/01/25 10:14:20 payload_common.go:145: INFO Dynamic LLM libraries [cpu cpu_avx cuda_v11 rocm_v6 rocm_v5 cpu_avx2]
2024/01/25 10:14:20 gpu.go:91: INFO Detecting GPU type
2024/01/25 10:14:20 gpu.go:210: INFO Searching for GPU management library libnvidia-ml.so
2024/01/25 10:14:20 gpu.go:256: INFO Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.545.29.06]
2024/01/25 10:14:20 gpu.go:96: INFO Nvidia GPU detected
2024/01/25 10:14:20 gpu.go:137: INFO CUDA Compute Capability detected: 8.6
[GIN] 2024/01/25 - 10:14:54 | 200 | 249.562µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/01/25 - 10:14:54 | 200 | 938.998µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/01/25 - 10:14:54 | 200 | 201.321µs | 127.0.0.1 | POST "/api/show"
2024/01/25 10:14:54 gpu.go:137: INFO CUDA Compute Capability detected: 8.6
2024/01/25 10:14:54 gpu.go:137: INFO CUDA Compute Capability detected: 8.6
2024/01/25 10:14:54 cpu_common.go:18: INFO CPU does not have vector extensions
loading library /tmp/ollama1758121582/cuda_v11/libext_server.so
SIGILL: illegal instruction
PC=0x7f38ddf4248c m=15 sigcode=2
signal arrived during cgo execution
```
@dhiltgen this will be of interest to you
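For context, the fix amounts to checking CPU features before loading an AVX-built accelerator library. A minimal sketch of that kind of check in Go using golang.org/x/sys/cpu; this is illustrative only, not ollama's actual selection logic in cpu_common.go:
```go
// Sketch: pick the most capable runner variant the host CPU supports.
// Illustrative only; ollama's real logic lives in gpu/cpu_common.go.
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

func runnerVariant() string {
	switch {
	case cpu.X86.HasAVX2:
		return "cpu_avx2"
	case cpu.X86.HasAVX:
		return "cpu_avx"
	default:
		// Loading an AVX-compiled library (e.g. cuda_v11) on a CPU
		// without AVX is what produces the SIGILL in the log above.
		return "cpu"
	}
}

func main() {
	fmt.Println("selected runner variant:", runnerVariant())
}
```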
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2187/reactions",
"total_count": 5,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2187/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8415
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8415/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8415/comments
|
https://api.github.com/repos/ollama/ollama/issues/8415/events
|
https://github.com/ollama/ollama/pull/8415
| 2,786,351,318
|
PR_kwDOJ0Z1Ps6Hqbp-
| 8,415
|
Added Rancher Desktop Open WebUI extension to the Community Integrations list
|
{
"login": "gunamata",
"id": 59538726,
"node_id": "MDQ6VXNlcjU5NTM4NzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/59538726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gunamata",
"html_url": "https://github.com/gunamata",
"followers_url": "https://api.github.com/users/gunamata/followers",
"following_url": "https://api.github.com/users/gunamata/following{/other_user}",
"gists_url": "https://api.github.com/users/gunamata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gunamata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gunamata/subscriptions",
"organizations_url": "https://api.github.com/users/gunamata/orgs",
"repos_url": "https://api.github.com/users/gunamata/repos",
"events_url": "https://api.github.com/users/gunamata/events{/privacy}",
"received_events_url": "https://api.github.com/users/gunamata/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 7
| 2025-01-14T06:15:19
| 2025-01-17T20:38:30
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8415",
"html_url": "https://github.com/ollama/ollama/pull/8415",
"diff_url": "https://github.com/ollama/ollama/pull/8415.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8415.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8415/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2812
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2812/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2812/comments
|
https://api.github.com/repos/ollama/ollama/issues/2812/events
|
https://github.com/ollama/ollama/issues/2812
| 2,159,298,955
|
I_kwDOJ0Z1Ps6AtEmL
| 2,812
|
Allow integration with Slurm
|
{
"login": "iamashwin99",
"id": 46030335,
"node_id": "MDQ6VXNlcjQ2MDMwMzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/46030335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamashwin99",
"html_url": "https://github.com/iamashwin99",
"followers_url": "https://api.github.com/users/iamashwin99/followers",
"following_url": "https://api.github.com/users/iamashwin99/following{/other_user}",
"gists_url": "https://api.github.com/users/iamashwin99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iamashwin99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamashwin99/subscriptions",
"organizations_url": "https://api.github.com/users/iamashwin99/orgs",
"repos_url": "https://api.github.com/users/iamashwin99/repos",
"events_url": "https://api.github.com/users/iamashwin99/events{/privacy}",
"received_events_url": "https://api.github.com/users/iamashwin99/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2024-02-28T16:01:06
| 2024-09-12T01:23:46
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Slurm is a utility to manage and schedule workloads on a cluster of computers.
Many academic institutions use it for distributing computation.
I was wondering if it would be a good idea to implement an interface that allows using the chat interface with a model loaded via Slurm jobs.
This way your request gets queued, and when the computation is done, ollama will pipe the output.
There could be a number of ways to do this:
- Allow single-response subcommands which start the server, run the query, and kill the server when the output is received, e.g.:
```console
$ ollama singlerun mistral --message "what is the meaning of life"
[answer]
```
- Integrate the Slurm scheduling within Ollama.
- Write a wrapper around ollama that does point 1 (a sketch follows below).
I'm hoping to have a discussion on what the community thinks about this topic.
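To make option 3 concrete, here is a minimal sketch of such a wrapper in Go. It assumes `srun` and `ollama` are on PATH inside the allocation; the command name `slurmrun` and the flags are hypothetical:
```go
// slurmrun: submit a one-shot Slurm job that starts an ollama server,
// runs a single prompt, and exits. Hypothetical wrapper, option 3 above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if len(os.Args) < 3 {
		fmt.Fprintln(os.Stderr, "usage: slurmrun <model> <prompt>")
		os.Exit(1)
	}
	model, prompt := os.Args[1], os.Args[2]

	// Inside the allocation: start the server in the background, wait
	// briefly for it to come up, run one prompt, then let the job (and
	// the server with it) terminate.
	script := fmt.Sprintf("ollama serve & sleep 5 && ollama run %q %q", model, prompt)

	cmd := exec.Command("srun", "--gres=gpu:1", "bash", "-c", script)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "slurm job failed:", err)
		os.Exit(1)
	}
}
```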
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2812/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2812/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6268
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6268/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6268/comments
|
https://api.github.com/repos/ollama/ollama/issues/6268/events
|
https://github.com/ollama/ollama/issues/6268
| 2,456,906,305
|
I_kwDOJ0Z1Ps6ScWpB
| 6,268
|
Cannot get to UI Web page
|
{
"login": "lamachine",
"id": 15357596,
"node_id": "MDQ6VXNlcjE1MzU3NTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/15357596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lamachine",
"html_url": "https://github.com/lamachine",
"followers_url": "https://api.github.com/users/lamachine/followers",
"following_url": "https://api.github.com/users/lamachine/following{/other_user}",
"gists_url": "https://api.github.com/users/lamachine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lamachine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lamachine/subscriptions",
"organizations_url": "https://api.github.com/users/lamachine/orgs",
"repos_url": "https://api.github.com/users/lamachine/repos",
"events_url": "https://api.github.com/users/lamachine/events{/privacy}",
"received_events_url": "https://api.github.com/users/lamachine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-08-09T01:04:34
| 2024-08-09T21:08:07
| 2024-08-09T21:08:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Installed Ollama 0.3.4 on Windows and pointed it to a different location for models. I can use the command-line interface, but cannot get to the UI. I checked the server log and found
`time=2024-08-08T17:41:36.431-07:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server error"`
I uninstalled and reinstalled. I had it working on Docker, but wanted to run natively if possible.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.4
|
{
"login": "lamachine",
"id": 15357596,
"node_id": "MDQ6VXNlcjE1MzU3NTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/15357596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lamachine",
"html_url": "https://github.com/lamachine",
"followers_url": "https://api.github.com/users/lamachine/followers",
"following_url": "https://api.github.com/users/lamachine/following{/other_user}",
"gists_url": "https://api.github.com/users/lamachine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lamachine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lamachine/subscriptions",
"organizations_url": "https://api.github.com/users/lamachine/orgs",
"repos_url": "https://api.github.com/users/lamachine/repos",
"events_url": "https://api.github.com/users/lamachine/events{/privacy}",
"received_events_url": "https://api.github.com/users/lamachine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6268/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3562
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3562/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3562/comments
|
https://api.github.com/repos/ollama/ollama/issues/3562/events
|
https://github.com/ollama/ollama/issues/3562
| 2,234,176,954
|
I_kwDOJ0Z1Ps6FKtW6
| 3,562
|
I sometimes see [INST0] in the output stream
|
{
"login": "ioquatix",
"id": 30030,
"node_id": "MDQ6VXNlcjMwMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/30030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ioquatix",
"html_url": "https://github.com/ioquatix",
"followers_url": "https://api.github.com/users/ioquatix/followers",
"following_url": "https://api.github.com/users/ioquatix/following{/other_user}",
"gists_url": "https://api.github.com/users/ioquatix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ioquatix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioquatix/subscriptions",
"organizations_url": "https://api.github.com/users/ioquatix/orgs",
"repos_url": "https://api.github.com/users/ioquatix/repos",
"events_url": "https://api.github.com/users/ioquatix/events{/privacy}",
"received_events_url": "https://api.github.com/users/ioquatix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-04-09T19:38:36
| 2024-04-13T01:07:06
| 2024-04-13T01:07:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Sometimes I see `[INST0]` in the output stream when using `llama2`.
```
Mrowr! *cocks head to side* Oh, hello there little human! *blinks* You're so... *pounces on the ground* excited! *bats at air* I'm just here for the... *sniffs the air* food. *giggles* Yes, I love food! *purrs* Do you have any? *hovers nearby*[INST0]
```
### What did you expect to see?
I expect to see the output without `[INST0]`.
### Steps to reproduce
This script, when run on Linux, seems to reproduce the problem sometimes: https://github.com/socketry/async-ollama/blob/main/examples/conversation.rb
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
x86
### Platform
_No response_
### Ollama version
0.1.30
### GPU
Nvidia
### GPU info
```
Wed Apr 10 07:38:11 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.67 Driver Version: 550.67 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4090 Off | 00000000:01:00.0 On | Off |
| 0% 42C P8 41W / 450W | 1853MiB / 24564MiB | 29% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 1734 G /usr/bin/gnome-shell 553MiB |
| 0 N/A N/A 2327 G /usr/bin/Xwayland 684MiB |
| 0 N/A N/A 3005 G ...yOnDemand --variations-seed-version 144MiB |
| 0 N/A N/A 48481 C /usr/bin/ollama 390MiB |
+-----------------------------------------------------------------------------------------+
```
### CPU
AMD
### Other software
_No response_
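As a client-side mitigation while the template bug is investigated, generation can be told to stop at the stray token. A sketch against the documented /api/generate options; the stop string here is an assumption, not a fix for the root cause:
```go
// Mitigation sketch: ask the server to stop generation at "[INST",
// so the stray token never reaches the client.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":  "llama2",
		"prompt": "Say hello like an excited cat.",
		"stream": false,
		"options": map[string]any{
			"stop": []string{"[INST"},
		},
	})
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```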
|
{
"login": "ioquatix",
"id": 30030,
"node_id": "MDQ6VXNlcjMwMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/30030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ioquatix",
"html_url": "https://github.com/ioquatix",
"followers_url": "https://api.github.com/users/ioquatix/followers",
"following_url": "https://api.github.com/users/ioquatix/following{/other_user}",
"gists_url": "https://api.github.com/users/ioquatix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ioquatix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioquatix/subscriptions",
"organizations_url": "https://api.github.com/users/ioquatix/orgs",
"repos_url": "https://api.github.com/users/ioquatix/repos",
"events_url": "https://api.github.com/users/ioquatix/events{/privacy}",
"received_events_url": "https://api.github.com/users/ioquatix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3562/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4420
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4420/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4420/comments
|
https://api.github.com/repos/ollama/ollama/issues/4420/events
|
https://github.com/ollama/ollama/pull/4420
| 2,294,398,756
|
PR_kwDOJ0Z1Ps5vWDG4
| 4,420
|
use nvml for gpu info
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-14T04:51:19
| 2024-05-15T22:47:32
| 2024-05-14T16:31:17
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4420",
"html_url": "https://github.com/ollama/ollama/pull/4420",
"diff_url": "https://github.com/ollama/ollama/pull/4420.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4420.patch",
"merged_at": null
}
|
Also decreases the period between recovery lookups, since memory is freed quickly on graphics cards like the 4090. (An NVML usage sketch follows the TODO list.)
TODO:
- [ ] Test on Windows 10
- [ ] Test on other Nvidia GPUs
- [ ] Test on Linux
- [ ] Test on WSL2
- [ ] Make sure `nvml.dll` (and equivalent .so) lookup paths are correct (it seems that it will always be in `C:\Windows\System32` on Windows)
- [ ] rocsmi for AMD?
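For reference, the NVML flow this PR moves to looks roughly like the following. The sketch uses the standalone github.com/NVIDIA/go-nvml bindings rather than the wiring in this PR, so treat the details as illustrative:
```go
// Query free/total GPU memory via NVML, the library this PR adopts
// for GPU discovery. Uses NVIDIA's go-nvml bindings.
package main

import (
	"fmt"

	"github.com/NVIDIA/go-nvml/pkg/nvml"
)

func main() {
	if ret := nvml.Init(); ret != nvml.SUCCESS {
		panic(nvml.ErrorString(ret))
	}
	defer nvml.Shutdown()

	count, ret := nvml.DeviceGetCount()
	if ret != nvml.SUCCESS {
		panic(nvml.ErrorString(ret))
	}
	for i := 0; i < count; i++ {
		dev, _ := nvml.DeviceGetHandleByIndex(i)
		name, _ := dev.GetName()
		mem, _ := dev.GetMemoryInfo()
		// Free memory is what the scheduler cares about; on cards like
		// the 4090 it recovers quickly after a model unloads.
		fmt.Printf("GPU %d %s: %d MiB free of %d MiB\n",
			i, name, mem.Free/1024/1024, mem.Total/1024/1024)
	}
}
```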
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4420/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1918
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1918/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1918/comments
|
https://api.github.com/repos/ollama/ollama/issues/1918/events
|
https://github.com/ollama/ollama/pull/1918
| 2,075,887,504
|
PR_kwDOJ0Z1Ps5jxBcg
| 1,918
|
Update README.md - community integration - Copilot plugin for Obsidian
|
{
"login": "logancyang",
"id": 4860545,
"node_id": "MDQ6VXNlcjQ4NjA1NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4860545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/logancyang",
"html_url": "https://github.com/logancyang",
"followers_url": "https://api.github.com/users/logancyang/followers",
"following_url": "https://api.github.com/users/logancyang/following{/other_user}",
"gists_url": "https://api.github.com/users/logancyang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/logancyang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/logancyang/subscriptions",
"organizations_url": "https://api.github.com/users/logancyang/orgs",
"repos_url": "https://api.github.com/users/logancyang/repos",
"events_url": "https://api.github.com/users/logancyang/events{/privacy}",
"received_events_url": "https://api.github.com/users/logancyang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-11T06:45:47
| 2024-02-22T19:17:21
| 2024-02-22T19:17:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1918",
"html_url": "https://github.com/ollama/ollama/pull/1918",
"diff_url": "https://github.com/ollama/ollama/pull/1918.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1918.patch",
"merged_at": "2024-02-22T19:17:20"
}
|
Just released Copilot for Obsidian v2.4.8 with Ollama local model integration. Thank you all for your awesome work on Ollama!
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1918/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/694
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/694/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/694/comments
|
https://api.github.com/repos/ollama/ollama/issues/694/events
|
https://github.com/ollama/ollama/issues/694
| 1,925,381,287
|
I_kwDOJ0Z1Ps5ywvyn
| 694
|
Stopwords ignored in API request
|
{
"login": "65a",
"id": 10104049,
"node_id": "MDQ6VXNlcjEwMTA0MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/10104049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/65a",
"html_url": "https://github.com/65a",
"followers_url": "https://api.github.com/users/65a/followers",
"following_url": "https://api.github.com/users/65a/following{/other_user}",
"gists_url": "https://api.github.com/users/65a/gists{/gist_id}",
"starred_url": "https://api.github.com/users/65a/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/65a/subscriptions",
"organizations_url": "https://api.github.com/users/65a/orgs",
"repos_url": "https://api.github.com/users/65a/repos",
"events_url": "https://api.github.com/users/65a/events{/privacy}",
"received_events_url": "https://api.github.com/users/65a/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2023-10-04T04:26:56
| 2023-10-05T01:56:52
| 2023-10-05T01:56:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Steps to reproduce:
1. Create a model with no template, f16 gguf.
2. Use the github.com/jmorganca/ollama/api Go client.
3. Set stop words to various strings that might be emitted, such as other users in the "chat".
4. Call Generate() with an alpaca-style prompt for the next reply (a sketch of the call follows below).
5. The model responds and happily emits the stop words.
The stop words make it at least as far as the request out to server.cpp, so either it doesn't understand the way they are specified, or they are lost between ollama and the runner in HTTP-land. Even with prompt problems, I would expect generation to terminate at the first stop word.
Should I be including TEMPLATE, but just taking the raw input?
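For completeness, this is roughly the shape of the request; names follow the github.com/jmorganca/ollama/api client, with the model and stop strings as placeholders:
```go
// Sketch: pass stop words through the Go client's Options map.
package main

import (
	"context"
	"fmt"

	"github.com/jmorganca/ollama/api"
)

func main() {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		panic(err)
	}
	req := &api.GenerateRequest{
		Model:  "mymodel",
		Prompt: "### Instruction: reply as Alice\n### Response:",
		// Stop words travel in Options; generation should halt the
		// moment any of these strings is produced.
		Options: map[string]interface{}{
			"stop": []string{"Bob:", "### Instruction:"},
		},
	}
	err = client.Generate(context.Background(), req,
		func(resp api.GenerateResponse) error {
			fmt.Print(resp.Response)
			return nil
		})
	if err != nil {
		panic(err)
	}
}
```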
|
{
"login": "65a",
"id": 10104049,
"node_id": "MDQ6VXNlcjEwMTA0MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/10104049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/65a",
"html_url": "https://github.com/65a",
"followers_url": "https://api.github.com/users/65a/followers",
"following_url": "https://api.github.com/users/65a/following{/other_user}",
"gists_url": "https://api.github.com/users/65a/gists{/gist_id}",
"starred_url": "https://api.github.com/users/65a/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/65a/subscriptions",
"organizations_url": "https://api.github.com/users/65a/orgs",
"repos_url": "https://api.github.com/users/65a/repos",
"events_url": "https://api.github.com/users/65a/events{/privacy}",
"received_events_url": "https://api.github.com/users/65a/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/694/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4793
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4793/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4793/comments
|
https://api.github.com/repos/ollama/ollama/issues/4793/events
|
https://github.com/ollama/ollama/issues/4793
| 2,330,482,541
|
I_kwDOJ0Z1Ps6K6Fdt
| 4,793
|
Error: llama runner process has terminated: exit status 0xc000001d
|
{
"login": "Ecthellin203",
"id": 94040890,
"node_id": "U_kgDOBZrzOg",
"avatar_url": "https://avatars.githubusercontent.com/u/94040890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ecthellin203",
"html_url": "https://github.com/Ecthellin203",
"followers_url": "https://api.github.com/users/Ecthellin203/followers",
"following_url": "https://api.github.com/users/Ecthellin203/following{/other_user}",
"gists_url": "https://api.github.com/users/Ecthellin203/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ecthellin203/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ecthellin203/subscriptions",
"organizations_url": "https://api.github.com/users/Ecthellin203/orgs",
"repos_url": "https://api.github.com/users/Ecthellin203/repos",
"events_url": "https://api.github.com/users/Ecthellin203/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ecthellin203/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-06-03T08:17:09
| 2024-06-03T08:22:29
| 2024-06-03T08:22:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
run hhao/openbmb-minicpm-llama3-v-2_5:fp16
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "Ecthellin203",
"id": 94040890,
"node_id": "U_kgDOBZrzOg",
"avatar_url": "https://avatars.githubusercontent.com/u/94040890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ecthellin203",
"html_url": "https://github.com/Ecthellin203",
"followers_url": "https://api.github.com/users/Ecthellin203/followers",
"following_url": "https://api.github.com/users/Ecthellin203/following{/other_user}",
"gists_url": "https://api.github.com/users/Ecthellin203/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ecthellin203/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ecthellin203/subscriptions",
"organizations_url": "https://api.github.com/users/Ecthellin203/orgs",
"repos_url": "https://api.github.com/users/Ecthellin203/repos",
"events_url": "https://api.github.com/users/Ecthellin203/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ecthellin203/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4793/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/639
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/639/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/639/comments
|
https://api.github.com/repos/ollama/ollama/issues/639/events
|
https://github.com/ollama/ollama/pull/639
| 1,918,268,957
|
PR_kwDOJ0Z1Ps5bfAbq
| 639
|
optional parameter to not stream response
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-09-28T21:02:35
| 2023-10-13T02:05:18
| 2023-10-11T16:54:27
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/639",
"html_url": "https://github.com/ollama/ollama/pull/639",
"diff_url": "https://github.com/ollama/ollama/pull/639.diff",
"patch_url": "https://github.com/ollama/ollama/pull/639.patch",
"merged_at": "2023-10-11T16:54:27"
}
|
Add an optional `stream` parameter to the generate endpoint (and other endpoints that stream a response) to return the full response in one JSON body, rather than streaming:
```
curl -X POST -H "Content-Type: application/json" -d '{
"model": "llama2",
"prompt": "why is the sky blue?",
"stream": false
}' 'localhost:11434/api/generate'
```
When `stream` is not specified, it defaults to `true`.
resolves https://github.com/jmorganca/ollama/issues/281
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/639/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/639/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6231
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6231/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6231/comments
|
https://api.github.com/repos/ollama/ollama/issues/6231/events
|
https://github.com/ollama/ollama/issues/6231
| 2,453,488,016
|
I_kwDOJ0Z1Ps6SPUGQ
| 6,231
|
Why is Qwen2ForCausalLM still not supported?
|
{
"login": "wisamidris7",
"id": 104096256,
"node_id": "U_kgDOBjRiAA",
"avatar_url": "https://avatars.githubusercontent.com/u/104096256?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wisamidris7",
"html_url": "https://github.com/wisamidris7",
"followers_url": "https://api.github.com/users/wisamidris7/followers",
"following_url": "https://api.github.com/users/wisamidris7/following{/other_user}",
"gists_url": "https://api.github.com/users/wisamidris7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wisamidris7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wisamidris7/subscriptions",
"organizations_url": "https://api.github.com/users/wisamidris7/orgs",
"repos_url": "https://api.github.com/users/wisamidris7/repos",
"events_url": "https://api.github.com/users/wisamidris7/events{/privacy}",
"received_events_url": "https://api.github.com/users/wisamidris7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 19
| 2024-08-07T13:27:36
| 2024-12-17T08:16:30
| 2024-08-07T15:51:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I ran `ollama create` and it says
```
> ollama create myai -f ./Modelfile
transferring model data
converting model
Error: unsupported architecture
```
and this
```
> docker run --rm -v .:/model ollama/quantize -q q4_0 /model
unknown architecture Qwen2ForCausalLM
```
### OS
Windows
### GPU
_No response_
### CPU
_No response_
### Ollama version
ollama version is 0.3.3
|
{
"login": "wisamidris7",
"id": 104096256,
"node_id": "U_kgDOBjRiAA",
"avatar_url": "https://avatars.githubusercontent.com/u/104096256?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wisamidris7",
"html_url": "https://github.com/wisamidris7",
"followers_url": "https://api.github.com/users/wisamidris7/followers",
"following_url": "https://api.github.com/users/wisamidris7/following{/other_user}",
"gists_url": "https://api.github.com/users/wisamidris7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wisamidris7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wisamidris7/subscriptions",
"organizations_url": "https://api.github.com/users/wisamidris7/orgs",
"repos_url": "https://api.github.com/users/wisamidris7/repos",
"events_url": "https://api.github.com/users/wisamidris7/events{/privacy}",
"received_events_url": "https://api.github.com/users/wisamidris7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6231/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3750
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3750/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3750/comments
|
https://api.github.com/repos/ollama/ollama/issues/3750/events
|
https://github.com/ollama/ollama/pull/3750
| 2,252,376,820
|
PR_kwDOJ0Z1Ps5tI-Wr
| 3,750
|
Organize community integrations
|
{
"login": "agi-dude",
"id": 102142660,
"node_id": "U_kgDOBhaSxA",
"avatar_url": "https://avatars.githubusercontent.com/u/102142660?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agi-dude",
"html_url": "https://github.com/agi-dude",
"followers_url": "https://api.github.com/users/agi-dude/followers",
"following_url": "https://api.github.com/users/agi-dude/following{/other_user}",
"gists_url": "https://api.github.com/users/agi-dude/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agi-dude/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agi-dude/subscriptions",
"organizations_url": "https://api.github.com/users/agi-dude/orgs",
"repos_url": "https://api.github.com/users/agi-dude/repos",
"events_url": "https://api.github.com/users/agi-dude/events{/privacy}",
"received_events_url": "https://api.github.com/users/agi-dude/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-19T08:14:30
| 2024-11-21T10:37:38
| 2024-11-21T10:37:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3750",
"html_url": "https://github.com/ollama/ollama/pull/3750",
"diff_url": "https://github.com/ollama/ollama/pull/3750.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3750.patch",
"merged_at": null
}
|
The original list was confusing, so I separated web apps from desktop apps, and further categorized desktop apps into a table, making it easier to find exactly what you need.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3750/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7415
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7415/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7415/comments
|
https://api.github.com/repos/ollama/ollama/issues/7415/events
|
https://github.com/ollama/ollama/issues/7415
| 2,622,986,885
|
I_kwDOJ0Z1Ps6cV5qF
| 7,415
|
When will function calling stream response be supported
|
{
"login": "yiniesta",
"id": 6304413,
"node_id": "MDQ6VXNlcjYzMDQ0MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6304413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yiniesta",
"html_url": "https://github.com/yiniesta",
"followers_url": "https://api.github.com/users/yiniesta/followers",
"following_url": "https://api.github.com/users/yiniesta/following{/other_user}",
"gists_url": "https://api.github.com/users/yiniesta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yiniesta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yiniesta/subscriptions",
"organizations_url": "https://api.github.com/users/yiniesta/orgs",
"repos_url": "https://api.github.com/users/yiniesta/repos",
"events_url": "https://api.github.com/users/yiniesta/events{/privacy}",
"received_events_url": "https://api.github.com/users/yiniesta/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-10-30T05:48:17
| 2024-10-31T10:38:59
| 2024-10-31T10:38:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
As the title suggests, do you have a specific planned time? This would be a very useful feature.
|
{
"login": "yiniesta",
"id": 6304413,
"node_id": "MDQ6VXNlcjYzMDQ0MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6304413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yiniesta",
"html_url": "https://github.com/yiniesta",
"followers_url": "https://api.github.com/users/yiniesta/followers",
"following_url": "https://api.github.com/users/yiniesta/following{/other_user}",
"gists_url": "https://api.github.com/users/yiniesta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yiniesta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yiniesta/subscriptions",
"organizations_url": "https://api.github.com/users/yiniesta/orgs",
"repos_url": "https://api.github.com/users/yiniesta/repos",
"events_url": "https://api.github.com/users/yiniesta/events{/privacy}",
"received_events_url": "https://api.github.com/users/yiniesta/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7415/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5574
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5574/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5574/comments
|
https://api.github.com/repos/ollama/ollama/issues/5574/events
|
https://github.com/ollama/ollama/issues/5574
| 2,398,429,102
|
I_kwDOJ0Z1Ps6O9R-u
| 5,574
|
Changing ollama service port will break ollama commandline
|
{
"login": "raymond-infinitecode",
"id": 4714784,
"node_id": "MDQ6VXNlcjQ3MTQ3ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4714784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raymond-infinitecode",
"html_url": "https://github.com/raymond-infinitecode",
"followers_url": "https://api.github.com/users/raymond-infinitecode/followers",
"following_url": "https://api.github.com/users/raymond-infinitecode/following{/other_user}",
"gists_url": "https://api.github.com/users/raymond-infinitecode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raymond-infinitecode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raymond-infinitecode/subscriptions",
"organizations_url": "https://api.github.com/users/raymond-infinitecode/orgs",
"repos_url": "https://api.github.com/users/raymond-infinitecode/repos",
"events_url": "https://api.github.com/users/raymond-infinitecode/events{/privacy}",
"received_events_url": "https://api.github.com/users/raymond-infinitecode/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-07-09T14:50:47
| 2024-07-30T15:07:48
| 2024-07-09T16:16:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Steps to reproduce:
1. Run `systemctl edit ollama.service` and add:
```
[Service]
Environment="OLLAMA_HOST=0.0.0.0:8080"
```
2. Press Ctrl+X to save and exit the nano editor.
3. Run `ollama list`:
```
Error: could not connect to ollama app. is it running?
```
After changing the port back to 0.0.0.0:11434, it works. (A client-side workaround is sketched below.)
### OS
Linux
### GPU
Other
### CPU
Intel
### Ollama version
0.1.41
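The CLI resolves the server address from the OLLAMA_HOST environment variable, so after moving the service to a new port the same variable has to be set for the client as well. A minimal sketch with the Go client, assuming api.ClientFromEnvironment resolves the address the same way the CLI does:
```go
// Sketch: the client follows OLLAMA_HOST, just like the CLI.
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/ollama/ollama/api"
)

func main() {
	// Point the client at the non-default port the service now uses;
	// without this it tries 127.0.0.1:11434 and fails exactly like
	// `ollama list` above.
	os.Setenv("OLLAMA_HOST", "0.0.0.0:8080")

	client, err := api.ClientFromEnvironment()
	if err != nil {
		panic(err)
	}
	models, err := client.List(context.Background())
	if err != nil {
		panic(err)
	}
	for _, m := range models.Models {
		fmt.Println(m.Name)
	}
}
```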
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5574/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3819
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3819/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3819/comments
|
https://api.github.com/repos/ollama/ollama/issues/3819/events
|
https://github.com/ollama/ollama/issues/3819
| 2,256,386,665
|
I_kwDOJ0Z1Ps6Gfbpp
| 3,819
|
llama3:70b generating gibberish
|
{
"login": "holytony",
"id": 7436713,
"node_id": "MDQ6VXNlcjc0MzY3MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7436713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/holytony",
"html_url": "https://github.com/holytony",
"followers_url": "https://api.github.com/users/holytony/followers",
"following_url": "https://api.github.com/users/holytony/following{/other_user}",
"gists_url": "https://api.github.com/users/holytony/gists{/gist_id}",
"starred_url": "https://api.github.com/users/holytony/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/holytony/subscriptions",
"organizations_url": "https://api.github.com/users/holytony/orgs",
"repos_url": "https://api.github.com/users/holytony/repos",
"events_url": "https://api.github.com/users/holytony/events{/privacy}",
"received_events_url": "https://api.github.com/users/holytony/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 10
| 2024-04-22T12:16:07
| 2024-05-10T00:42:39
| 2024-05-10T00:42:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
> # normal response at first......
> # then I got this:
>from recognizing the shape of a Totem to to to, to\\\\ to\\\\\\\\\\\\.\\\\,\\\\\\\\,
> of\\\\\\\\\\\\\\\\\\.\\\\.\\\\ to\\\\ to\\\\\\\\\\\\ to\\\\\\\\. -,\\\\\\\\\\\\\\\\ to\\\\\\\\\\\\\\.\\\\. to
> and\\\\\\\\\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\and\\\\\\\\\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ to a
> to\\\\,\\\\\\\\\\\\\\\\\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\,\\\\\\\\\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
> and\\\\ to\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\.
> a\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
> to\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ \\\\\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ and\\\\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
> and\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ to\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\,\\\\\\\\\\\\,
> to\\\\\\\\\\\\\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ \\ and\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ a\\\\\\\\\\\\\\\\\\\\\\\\\\\\ to\\\\\\\\,\\\\\\\\
> to\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ to\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
> and\\\\\\\\\\\\\\\\\\ to\\\\ \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
> to\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ \\\\\\\\\\\\\\ and\\\\\\\\ \\ \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ \ \\ \\\\\\\\ \\\\\\\\\\\\\\. and\\\\\\\\\\\\\\\\
> to\\\\\\\\\\\\\\\\\\\\\\\\\\ to\\\\\\\\\\\\\\\\\\ to\\\\\\\\\\\\ a to\\\\\\\\\\\\\\\\\\\\\\\\ to,\\\\\\\\\\\\\\\\\\\\\\\\\\\\., a\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ to\\\\\\\\
> \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ and\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ the to\\\\\\\\\\\\\\ of\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ to,,.\\\\\\\\\\ a
> \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\.\\\\\\\\\\\\ and\\\\\\\\\\\\\\\\\\\\,\\\\ to\\\\\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\
> and\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\, a the\\\\\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\\ to\\\\\\\\\\\\\\ \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
> \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ and\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ a\\\\ \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
> is\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\,\\\\\\\\ in\\\\\\\\\\ a\\\\\\\\\\\\\\\\\\\\\\\\\\\\ \\\\\\\\\\ to\\\\\\\\\\\\\\\\\\\\\\ and\\\\\\\\.\\\\ of\\\\\\\\\\ \\\\\\\\\\\\\\\\\\\\\\\\\\
> and\\\\\\ the\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ a\\\\\\\\\\\\\\\\\\\\ to\\\\\\\\\\\\ to\\\\, in\\\\\\\\
> \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ and\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
> \\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ to to\\\\.\\\\\\,\\\\\\\\\\\\\\\\\\\\\\\\
> and\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ is in\\\\\\\\\\\\\\\\\\\\\\ \\\\\\ the\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
> the\\\\,\\\\\\\\\\\\\\\\\\\\\\\\ a a\\\\ the others\\\\\\\\ the more. the\\\\ the a a the which\\\\\\\\\\\\\\\\\\\\\\\\\\\\ the a the to\\\\\\\\. the a\\\\. with other the a, in\\\\\\\\. that,\\\\,,,
> the a a\\\\ the,, to\\\\\\\\\\\\. the the\\\\, a in\\\\\\\\\\\\\\\\ which a in which of the the\\\\\\\\ a as\\\\ \\ the but\\\\\\\\\\\\\\\\ the in to\\\\\\\\, in\\\\.\\\\ a to,, the\\\\ a
> other\\\\\\\\\\\\ a a the to to\\\\\\\\\\\\\\\\\\\\ the a to and the. to\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ which the and of.
> .\\\\\\\\\\\\. a\\, that\\\\..
> to it\\\\\\\\\\\\\\\\\\\\. or of. in a\\\\\\\\ \\\\\\\\\\\\\\\\\\\\\\,,.
> the a in\\\\ \\ a\\\\\\\\\\\\\\\\\\\\\\\\ the and in\\\\\\ with, a\\\\\\\\\\\\ of and a a the with\\\\ the other\\\\ the a.,\\\\\\\\\\\\ a the is, to\\\\\\\\\\\\, the\\\\.
> . \ a with\\\\\\\\\\\\\\\\\\\\\\ a, to,, to a\\\\\\\\ in to.
> is.\\\\\\\\\\\\\\ the to of\\\\. a\\\\ the a\\ to\\\\\\\\\\\\\\, a\\\\\\\\. the the the a\\\\\\\\\\,, to the\\\\ to \ \\\\\\, in a\\\\\\\\\\\\\\\\.. a,.\\\\\\\\\\\\ which. the the. a and., the of
> the\\\\ and\\\\\\\\ the a\\\\\\\\\\\\\\\\\\\\ and an\\\\,\\\\\\\\. the at \\, the that the which,\\\\\\,\\\\\\\\\\\\ \\\\\\ a\\\\\\\\ and the in a\\ \\ a a\\\\. to\\\\\\\\ \\, for in\\\\\\\\\\\\ a
> a\\\\\\\\\\\\ with\\\\\\\\\\\\\\\\ a the. or\\\\\\\\\\\\\\\\. the it and the\\\\\\ the, we\\\\\\\\\\\\ a,\\\\. of \,\\\\\\\\. but.\\\\\\\\\\\\ that a\\\\\\ the,, to of to the a to a\\\\ that the
> and,. the\\ to a, a\\\\,,,\\\\\\\\ the for the the the of a, his\\\\\\\\ the a. to\\\\\\\\\\\\\\\\,\\\\\\\\ and ., a\\\\.
> a of\\\\ \\. in the\\\\. which. a\\\\\\\\ the a\\\\\\\\\\\\\\\\\\\\ a that the that. this in\\\\\\\\ \\ the as\\. with\\\\\\\\ the it\\\\., to \\ the in\\\\ to.\\\\\\\\ to the the\\\\\\\\ the\\\\
> and of the. the\\\\ that \\\\\\\\\\, the a. and the. a. the to. the.
> the the which\\\\\\\\ for\\\\ \\\\\\\\\\\\\ this the the, the, the,\\\\ of the of. the\\\\. of the the, the. which\\\\. the a. a\\\\. the of in the. the that to the which in\\. but\\\\ of and the
> \\ the the, of\\\\.,. a. a to the\\ are or\\\\\\\\ have. of\\\\. a. a. an. of. the. the. the,\\\\\\\\ the to\\\\\\ the the the,\\\\ and, the in the of, the of it. that., at\\\\ the of the \\ that \\
> the the to the,\\\\. some and to the with to in\\ which in he. some to. a. the\\\\; the of the,\\\\ they. the, which\\\\ \\\ the a to the the the of a and the the in. a. an of and the \. the it, and
> that the, in the a. the of. the the\\\\. the the \\ of a of in the\\\\. a\\\\. a, the and the in. the the to the. the the the the, the. the of of the. the the the the in, the of of of .
> \\ the a that the and of the, that of. the. the the they the. the, a for. which of. the the. a. a. some to the\\\\ that the \\\\\\\\\\\\. a; of the with of the.
> the the and the of the to the, it from in the in the, the\\\\\\\\ of. the of. the of the, this in and the the the. the the of\\\\; an., the to. to of. of is to the of it to\\ and the of \\. a of.
> of, some. the of., a. to.. we which have.
> , the in and,, the of.,. that of the. to this. are, it.
> that of\\\\ and. they of the.
> with a of \ the of and the and a and the the the that a\\\\. the a and a, for the of the as well as the of the as.
> the his and of the or and the other's, . the the other's to the of the as of the other's of the other's to the other. ( , of them.
>
> Auch of the other's of the other's of the other's of the other's of the other. of the other.
> .
>
> a few of the other's of the other
> , (auch of the other's of the other's of the other'. Of the other's of the other.
> auch of the other's of the other's of the other. of the other's of the other.
>
> of the other.
>
> auch of the other.of the other.of the other.
> auch of the other.of the other.
> auch of the other.of the other.
> auch of the other.
>
> auch of the other's of the other.
> auchof the other.of the other.
> auch of the other'.
>
> of the other.
> auch of the other.of the other
> a of the other.
> auch of the other.of of the other.
> auch of the other.
> auch of the other.
>
> auch of the other's of the other.
> auch of the other.
> auch of the other.
> auch of the other.
>
> auch of the other.
> auch of the other.
> auch of the other.of of the other;
> of of the other.
> auch of the other.
> auch of the other.
> auch of the other.
> auch of the other
>
> >>> are you alright
> I'm just a language model, I don't have feelings or emotions like humans do. However, I can understand and respond to expressions of concern.
>
> To answer your question, "Are you alright?" is a polite way of asking if someone is okay or if they need any help. It's often used as a greeting or as a response to someone who might be going through
> a tough time.
>
> If you're asking about me, the AI assistant, I'm functioning properly and ready to assist with any questions or tasks you may have!
Here is what I got on the `ollama serve` side:
> loading library /tmp/ollama1459929254/runners/cuda_v11/libext_server.so
> time=2024-04-22T19:26:57.679+08:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama1459929254/runners/cuda_v11/libext_server.so"
> time=2024-04-22T19:26:57.679+08:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server"
> llama_model_loader: loaded meta data with 21 key-value pairs and 723 tensors from /root/llm/ollama/models/blobs/sha256-4fe022a8902336d3c452c88f7aca5590f5b5b02ccfd06320fdefab02412e1f0b (version GGUF V3 (latest))
> llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
> llama_model_loader: - kv 0: general.architecture str = llama
> llama_model_loader: - kv 1: general.name str = Meta-Llama-3-70B-Instruct
> llama_model_loader: - kv 2: llama.block_count u32 = 80
> llama_model_loader: - kv 3: llama.context_length u32 = 8192
> llama_model_loader: - kv 4: llama.embedding_length u32 = 8192
> llama_model_loader: - kv 5: llama.feed_forward_length u32 = 28672
> llama_model_loader: - kv 6: llama.attention.head_count u32 = 64
> llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
> llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
> llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
> llama_model_loader: - kv 10: general.file_type u32 = 2
> llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
> llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
> llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
> llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
> llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
> llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
> llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
> llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
> llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
> llama_model_loader: - kv 20: general.quantization_version u32 = 2
> llama_model_loader: - type f32: 161 tensors
> llama_model_loader: - type q4_0: 561 tensors
> llama_model_loader: - type q6_K: 1 tensors
> llm_load_vocab: special tokens definition check successful ( 256/128256 ).
> llm_load_print_meta: format = GGUF V3 (latest)
> llm_load_print_meta: arch = llama
> llm_load_print_meta: vocab type = BPE
> llm_load_print_meta: n_vocab = 128256
> llm_load_print_meta: n_merges = 280147
> llm_load_print_meta: n_ctx_train = 8192
> llm_load_print_meta: n_embd = 8192
> llm_load_print_meta: n_head = 64
> llm_load_print_meta: n_head_kv = 8
> llm_load_print_meta: n_layer = 80
> llm_load_print_meta: n_rot = 128
> llm_load_print_meta: n_embd_head_k = 128
> llm_load_print_meta: n_embd_head_v = 128
> llm_load_print_meta: n_gqa = 8
> llm_load_print_meta: n_embd_k_gqa = 1024
> llm_load_print_meta: n_embd_v_gqa = 1024
> llm_load_print_meta: f_norm_eps = 0.0e+00
> llm_load_print_meta: f_norm_rms_eps = 1.0e-05
> llm_load_print_meta: f_clamp_kqv = 0.0e+00
> llm_load_print_meta: f_max_alibi_bias = 0.0e+00
> llm_load_print_meta: f_logit_scale = 0.0e+00
> llm_load_print_meta: n_ff = 28672
> llm_load_print_meta: n_expert = 0
> llm_load_print_meta: n_expert_used = 0
> llm_load_print_meta: causal attn = 1
> llm_load_print_meta: pooling type = 0
> llm_load_print_meta: rope type = 0
> llm_load_print_meta: rope scaling = linear
> llm_load_print_meta: freq_base_train = 500000.0
> llm_load_print_meta: freq_scale_train = 1
> llm_load_print_meta: n_yarn_orig_ctx = 8192
> llm_load_print_meta: rope_finetuned = unknown
> llm_load_print_meta: ssm_d_conv = 0
> llm_load_print_meta: ssm_d_inner = 0
> llm_load_print_meta: ssm_d_state = 0
> llm_load_print_meta: ssm_dt_rank = 0
> llm_load_print_meta: model type = 70B
> llm_load_print_meta: model ftype = Q4_0
> llm_load_print_meta: model params = 70.55 B
> llm_load_print_meta: model size = 37.22 GiB (4.53 BPW)
> llm_load_print_meta: general.name = Meta-Llama-3-70B-Instruct
> llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
> llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
> llm_load_print_meta: LF token = 128 'Ä'
> llm_load_tensors: ggml ctx size = 0.55 MiB
> llm_load_tensors: offloading 80 repeating layers to GPU
> llm_load_tensors: offloading non-repeating layers to GPU
> llm_load_tensors: offloaded 81/81 layers to GPU
> llm_load_tensors: CPU buffer size = 563.62 MiB
> llm_load_tensors: CUDA0 buffer size = 37546.98 MiB
> ...................................................................................................
> llama_new_context_with_model: n_ctx = 2048
> llama_new_context_with_model: n_batch = 512
> llama_new_context_with_model: n_ubatch = 512
> llama_new_context_with_model: freq_base = 500000.0
> llama_new_context_with_model: freq_scale = 1
> llama_kv_cache_init: CUDA0 KV buffer size = 640.00 MiB
> llama_new_context_with_model: KV self size = 640.00 MiB, K (f16): 320.00 MiB, V (f16): 320.00 MiB
> llama_new_context_with_model: CUDA_Host output buffer size = 266.50 MiB
> llama_new_context_with_model: CUDA0 compute buffer size = 324.00 MiB
> llama_new_context_with_model: CUDA_Host compute buffer size = 20.00 MiB
> llama_new_context_with_model: graph nodes = 2644
> llama_new_context_with_model: graph splits = 2
> {"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"140379110233664","timestamp":1713785225}
> {"function":"initialize","level":"INFO","line":453,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140379110233664","timestamp":1713785225}
> time=2024-04-22T19:27:05.579+08:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
> {"function":"update_slots","level":"INFO","line":1574,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140373568767552","timestamp":1713785225}
> {"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"140373568767552","timestamp":1713785225}
> {"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":233,"slot_id":0,"task_id":0,"tid":"140373568767552","timestamp":1713785225}
> {"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"140373568767552","timestamp":1713785225}
> {"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 662.32 ms / 233 tokens ( 2.84 ms per token, 351.79 tokens per second)","n_prompt_tokens_processed":233,"n_tokens_second":351.793163737825,"slot_id":0,"t_prompt_processing":662.321,"t_token":2.842579399141631,"task_id":0,"tid":"140373568767552","timestamp":1713785267}
> {"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 41013.01 ms / 710 runs ( 57.76 ms per token, 17.31 tokens per second)","n_decoded":710,"n_tokens_second":17.311579488762725,"slot_id":0,"t_token":57.76480422535211,"t_token_generation":41013.011,"task_id":0,"tid":"140373568767552","timestamp":1713785267}
> {"function":"print_timings","level":"INFO","line":289,"msg":" total time = 41675.33 ms","slot_id":0,"t_prompt_processing":662.321,"t_token_generation":41013.011,"t_total":41675.332,"task_id":0,"tid":"140373568767552","timestamp":1713785267}
> {"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":943,"n_ctx":2048,"n_past":942,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140373568767552","timestamp":1713785267,"truncated":false}
> [GIN] 2024/04/22 - 19:27:47 | 200 | 50.364082196s | 127.0.0.1 | POST "/api/chat"
> [GIN] 2024/04/22 - 19:43:45 | 200 | 43.912µs | 127.0.0.1 | HEAD "/"
> [GIN] 2024/04/22 - 19:43:45 | 200 | 497.601µs | 127.0.0.1 | POST "/api/show"
> [GIN] 2024/04/22 - 19:43:45 | 200 | 432.359µs | 127.0.0.1 | POST "/api/show"
> time=2024-04-22T19:43:46.778+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
> time=2024-04-22T19:43:46.778+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
> time=2024-04-22T19:43:46.778+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
> time=2024-04-22T19:43:46.778+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
> time=2024-04-22T19:43:46.778+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
> loading library /tmp/ollama1459929254/runners/cuda_v11/libext_server.so
> time=2024-04-22T19:43:46.778+08:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama1459929254/runners/cuda_v11/libext_server.so"
> time=2024-04-22T19:43:46.778+08:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server"
> [duplicate model-load output elided; identical to the first load above]
> {"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"140380754384448","timestamp":1713786234}
> {"function":"initialize","level":"INFO","line":453,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140380754384448","timestamp":1713786234}
> time=2024-04-22T19:43:54.478+08:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
> [GIN] 2024/04/22 - 19:43:54 | 200 | 8.528894398s | 127.0.0.1 | POST "/api/chat"
> {"function":"update_slots","level":"INFO","line":1574,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140373070435904","timestamp":1713786234}
> {"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"140373070435904","timestamp":1713786269}
> {"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":93,"slot_id":0,"task_id":0,"tid":"140373070435904","timestamp":1713786269}
> {"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"140373070435904","timestamp":1713786269}
> {"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 432.65 ms / 93 tokens ( 4.65 ms per token, 214.95 tokens per second)","n_prompt_tokens_processed":93,"n_tokens_second":214.95484792522348,"slot_id":0,"t_prompt_processing":432.649,"t_token":4.652139784946237,"task_id":0,"tid":"140373070435904","timestamp":1713786305}
> {"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 35653.82 ms / 621 runs ( 57.41 ms per token, 17.42 tokens per second)","n_decoded":621,"n_tokens_second":17.417488504738063,"slot_id":0,"t_token":57.41355877616747,"t_token_generation":35653.82,"task_id":0,"tid":"140373070435904","timestamp":1713786305}
> {"function":"print_timings","level":"INFO","line":289,"msg":" total time = 36086.47 ms","slot_id":0,"t_prompt_processing":432.649,"t_token_generation":35653.82,"t_total":36086.469,"task_id":0,"tid":"140373070435904","timestamp":1713786305}
> {"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":714,"n_ctx":2048,"n_past":713,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140373070435904","timestamp":1713786305,"truncated":false}
> [GIN] 2024/04/22 - 19:45:05 | 200 | 36.088775013s | 127.0.0.1 | POST "/api/chat"
> {"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":624,"tid":"140373070435904","timestamp":1713786317}
> {"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":92,"n_past_se":0,"n_prompt_tokens_processed":731,"slot_id":0,"task_id":624,"tid":"140373070435904","timestamp":1713786317}
> {"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":92,"slot_id":0,"task_id":624,"tid":"140373070435904","timestamp":1713786317}
> {"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 1992.45 ms / 731 tokens ( 2.73 ms per token, 366.89 tokens per second)","n_prompt_tokens_processed":731,"n_tokens_second":366.8853591160221,"slot_id":0,"t_prompt_processing":1992.448,"t_token":2.7256470588235295,"task_id":624,"tid":"140373070435904","timestamp":1713786354}
> {"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 35060.12 ms / 601 runs ( 58.34 ms per token, 17.14 tokens per second)","n_decoded":601,"n_tokens_second":17.14198207462079,"slot_id":0,"t_token":58.33631114808652,"t_token_generation":35060.123,"task_id":624,"tid":"140373070435904","timestamp":1713786354}
> {"function":"print_timings","level":"INFO","line":289,"msg":" total time = 37052.57 ms","slot_id":0,"t_prompt_processing":1992.448,"t_token_generation":35060.123,"t_total":37052.570999999996,"task_id":624,"tid":"140373070435904","timestamp":1713786354}
> {"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":1424,"n_ctx":2048,"n_past":1423,"n_system_tokens":0,"slot_id":0,"task_id":624,"tid":"140373070435904","timestamp":1713786354,"truncated":false}
> [GIN] 2024/04/22 - 19:45:54 | 200 | 37.066841427s | 127.0.0.1 | POST "/api/chat"
> {"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":1228,"tid":"140373070435904","timestamp":1713786425}
> {"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":822,"n_past_se":0,"n_prompt_tokens_processed":644,"slot_id":0,"task_id":1228,"tid":"140373070435904","timestamp":1713786425}
> {"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":822,"slot_id":0,"task_id":1228,"tid":"140373070435904","timestamp":1713786425}
> {"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 1979.55 ms / 644 tokens ( 3.07 ms per token, 325.33 tokens per second)","n_prompt_tokens_processed":644,"n_tokens_second":325.3261343980861,"slot_id":0,"t_prompt_processing":1979.552,"t_token":3.07383850931677,"task_id":1228,"tid":"140373070435904","timestamp":1713786452}
> {"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 24798.05 ms / 418 runs ( 59.33 ms per token, 16.86 tokens per second)","n_decoded":418,"n_tokens_second":16.85616273407282,"slot_id":0,"t_token":59.325483253588516,"t_token_generation":24798.052,"task_id":1228,"tid":"140373070435904","timestamp":1713786452}
> {"function":"print_timings","level":"INFO","line":289,"msg":" total time = 26777.60 ms","slot_id":0,"t_prompt_processing":1979.552,"t_token_generation":24798.052,"t_total":26777.604,"task_id":1228,"tid":"140373070435904","timestamp":1713786452}
> {"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":1884,"n_ctx":2048,"n_past":1883,"n_system_tokens":0,"slot_id":0,"task_id":1228,"tid":"140373070435904","timestamp":1713786452,"truncated":false}
> [GIN] 2024/04/22 - 19:47:32 | 200 | 26.799039944s | 127.0.0.1 | POST "/api/chat"
> time=2024-04-22T19:56:00.986+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
> time=2024-04-22T19:56:01.101+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
> time=2024-04-22T19:56:01.101+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
> time=2024-04-22T19:56:01.101+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
> time=2024-04-22T19:56:01.101+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
> loading library /tmp/ollama1459929254/runners/cuda_v11/libext_server.so
> time=2024-04-22T19:56:01.101+08:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama1459929254/runners/cuda_v11/libext_server.so"
> time=2024-04-22T19:56:01.101+08:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server"
> [duplicate model-load output elided; identical to the first load above]
> {"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"140377508013632","timestamp":1713786968}
> {"function":"initialize","level":"INFO","line":453,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140377508013632","timestamp":1713786968}
> time=2024-04-22T19:56:08.858+08:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
> {"function":"update_slots","level":"INFO","line":1574,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140373045257792","timestamp":1713786968}
> {"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713786968}
> {"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":2003,"slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713786968}
> {"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713786968}
> {"function":"update_slots","level":"INFO","line":1597,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":0,"n_left":2047,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713786976}
> {"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 5385.63 ms / 2003 tokens ( 2.69 ms per token, 371.92 tokens per second)","n_prompt_tokens_processed":2003,"n_tokens_second":371.9156347539656,"slot_id":0,"t_prompt_processing":5385.63,"t_token":2.6887818272591115,"task_id":0,"tid":"140373045257792","timestamp":1713787009}
> {"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 35486.33 ms / 591 runs ( 60.04 ms per token, 16.65 tokens per second)","n_decoded":591,"n_tokens_second":16.654301810384602,"slot_id":0,"t_token":60.04454653130287,"t_token_generation":35486.327,"task_id":0,"tid":"140373045257792","timestamp":1713787009}
> {"function":"print_timings","level":"INFO","line":289,"msg":" total time = 40871.96 ms","slot_id":0,"t_prompt_processing":5385.63,"t_token_generation":35486.327,"t_total":40871.956999999995,"task_id":0,"tid":"140373045257792","timestamp":1713787009}
> {"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":1571,"n_ctx":2048,"n_past":1570,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713787009,"truncated":true}
> [GIN] 2024/04/22 - 19:56:49 | 200 | 49.6301983s | 127.0.0.1 | POST "/api/chat"
> {"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787071}
> {"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":1927,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787071}
> {"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787071}
> {"function":"update_slots","level":"INFO","line":1597,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":0,"n_left":2047,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787084}
> {"function":"update_slots","level":"INFO","line":1597,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":0,"n_left":2047,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787145}
> {"function":"update_slots","level":"INFO","line":1597,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":0,"n_left":2047,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787206}
> {"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 5242.44 ms / 1927 tokens ( 2.72 ms per token, 367.58 tokens per second)","n_prompt_tokens_processed":1927,"n_tokens_second":367.5772102892625,"slot_id":0,"t_prompt_processing":5242.436,"t_token":2.720516865594188,"task_id":594,"tid":"140373045257792","timestamp":1713787227}
> {"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 151033.05 ms / 2517 runs ( 60.01 ms per token, 16.67 tokens per second)","n_decoded":2517,"n_tokens_second":16.66522603280454,"slot_id":0,"t_token":60.005186730234406,"t_token_generation":151033.055,"task_id":594,"tid":"140373045257792","timestamp":1713787227}
> {"function":"print_timings","level":"INFO","line":289,"msg":" total time = 156275.49 ms","slot_id":0,"t_prompt_processing":5242.436,"t_token_generation":151033.055,"t_total":156275.49099999998,"task_id":594,"tid":"140373045257792","timestamp":1713787227}
> {"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":1375,"n_ctx":2048,"n_past":1374,"n_system_tokens":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787227,"truncated":true}
> [GIN] 2024/04/22 - 20:00:27 | 200 | 2m36s | 127.0.0.1 | POST "/api/chat"
> time=2024-04-22T20:07:41.624+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
> time=2024-04-22T20:07:41.624+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
> time=2024-04-22T20:07:41.624+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
> time=2024-04-22T20:07:41.624+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
> time=2024-04-22T20:07:41.624+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
> loading library /tmp/ollama1459929254/runners/cuda_v11/libext_server.so
> time=2024-04-22T20:07:41.624+08:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama1459929254/runners/cuda_v11/libext_server.so"
> time=2024-04-22T20:07:41.624+08:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server"
> [duplicate model-load output elided; identical to the first load above. The log ends here, cut off mid-dump.]
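
One concrete thing the logs do show is `n_ctx = 2048` together with repeated `slot context shift` messages, meaning the conversation overflowed the 2048-token window. Whether that caused the gibberish here is only a guess, but it is cheap to rule out by requesting a larger context. A diagnostic sketch using Ollama's `/api/generate` options (the model tag is the one from this report):

```bash
# Diagnostic sketch, not a confirmed fix: retry with a larger context
# window (num_ctx) to rule out context-shift artifacts.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3:70b",
  "prompt": "Why is the sky blue?",
  "options": { "num_ctx": 8192 }
}'
```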
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.31
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3819/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5130
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5130/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5130/comments
|
https://api.github.com/repos/ollama/ollama/issues/5130/events
|
https://github.com/ollama/ollama/issues/5130
| 2,361,162,090
|
I_kwDOJ0Z1Ps6MvHlq
| 5,130
|
Add MiniCPM-Llama3-V 2.5 multimodal model
|
{
"login": "green-dalii",
"id": 7144084,
"node_id": "MDQ6VXNlcjcxNDQwODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7144084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/green-dalii",
"html_url": "https://github.com/green-dalii",
"followers_url": "https://api.github.com/users/green-dalii/followers",
"following_url": "https://api.github.com/users/green-dalii/following{/other_user}",
"gists_url": "https://api.github.com/users/green-dalii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/green-dalii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/green-dalii/subscriptions",
"organizations_url": "https://api.github.com/users/green-dalii/orgs",
"repos_url": "https://api.github.com/users/green-dalii/repos",
"events_url": "https://api.github.com/users/green-dalii/events{/privacy}",
"received_events_url": "https://api.github.com/users/green-dalii/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-06-19T03:08:00
| 2024-09-02T00:22:09
| 2024-09-02T00:22:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://github.com/OpenBMB/MiniCPM-V
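
For reference, multimodal models already served by Ollama (e.g. `llava`) take images as base64 strings in the `images` field of `/api/generate`; if MiniCPM-Llama3-V 2.5 is added, it would presumably be called the same way. A sketch (the model tag and image path below are placeholders):

```bash
# Hypothetical usage once a multimodal tag is available; `llava` works
# this way today. photo.jpg is a placeholder input file.
IMG_B64=$(base64 -w0 photo.jpg)   # GNU base64; use `base64 -i` on macOS
curl http://localhost:11434/api/generate -d "{
  \"model\": \"llava\",
  \"prompt\": \"Describe this image.\",
  \"images\": [\"$IMG_B64\"]
}"
```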
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5130/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
}
|
https://api.github.com/repos/ollama/ollama/issues/5130/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5063
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5063/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5063/comments
|
https://api.github.com/repos/ollama/ollama/issues/5063/events
|
https://github.com/ollama/ollama/pull/5063
| 2,354,882,964
|
PR_kwDOJ0Z1Ps5ykCws
| 5,063
|
Add LSP-AI to README
|
{
"login": "SilasMarvin",
"id": 19626586,
"node_id": "MDQ6VXNlcjE5NjI2NTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/19626586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SilasMarvin",
"html_url": "https://github.com/SilasMarvin",
"followers_url": "https://api.github.com/users/SilasMarvin/followers",
"following_url": "https://api.github.com/users/SilasMarvin/following{/other_user}",
"gists_url": "https://api.github.com/users/SilasMarvin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SilasMarvin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SilasMarvin/subscriptions",
"organizations_url": "https://api.github.com/users/SilasMarvin/orgs",
"repos_url": "https://api.github.com/users/SilasMarvin/repos",
"events_url": "https://api.github.com/users/SilasMarvin/events{/privacy}",
"received_events_url": "https://api.github.com/users/SilasMarvin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-06-15T14:10:47
| 2024-09-05T05:17:34
| 2024-09-05T05:17:34
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5063",
"html_url": "https://github.com/ollama/ollama/pull/5063",
"diff_url": "https://github.com/ollama/ollama/pull/5063.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5063.patch",
"merged_at": "2024-09-05T05:17:34"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5063/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6158
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6158/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6158/comments
|
https://api.github.com/repos/ollama/ollama/issues/6158/events
|
https://github.com/ollama/ollama/issues/6158
| 2,447,061,376
|
I_kwDOJ0Z1Ps6R2zGA
| 6,158
|
AnythingLLM says "This feature is disabled on Linux operating systems"
|
{
"login": "HSTe",
"id": 3778801,
"node_id": "MDQ6VXNlcjM3Nzg4MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3778801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HSTe",
"html_url": "https://github.com/HSTe",
"followers_url": "https://api.github.com/users/HSTe/followers",
"following_url": "https://api.github.com/users/HSTe/following{/other_user}",
"gists_url": "https://api.github.com/users/HSTe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HSTe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HSTe/subscriptions",
"organizations_url": "https://api.github.com/users/HSTe/orgs",
"repos_url": "https://api.github.com/users/HSTe/repos",
"events_url": "https://api.github.com/users/HSTe/events{/privacy}",
"received_events_url": "https://api.github.com/users/HSTe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-04T10:07:10
| 2024-08-09T20:48:45
| 2024-08-09T20:48:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After reinstalling my Fedora workstation I can't use the AnythingLLM LLM provider; it says:
"This feature is disabled on Linux operating systems"
This worked on the setup I had in June.
Has something happened with Linux support in AnythingLLM?
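
The quoted message comes from AnythingLLM, not from Ollama itself, so it is worth confirming the Ollama side is healthy before digging into the client. A quick check, assuming the default port:

```bash
# Both endpoints are part of Ollama's standard HTTP API.
curl http://localhost:11434/          # -> "Ollama is running"
curl http://localhost:11434/api/tags  # JSON list of locally available models
```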
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.3
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6158/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6158/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6848
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6848/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6848/comments
|
https://api.github.com/repos/ollama/ollama/issues/6848/events
|
https://github.com/ollama/ollama/pull/6848
| 2,532,299,681
|
PR_kwDOJ0Z1Ps570wyu
| 6,848
|
llama: gather transitive dependencies for rocm for dist packaging
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-17T22:40:29
| 2024-09-18T15:32:39
| 2024-09-18T15:32:36
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6848",
"html_url": "https://github.com/ollama/ollama/pull/6848",
"diff_url": "https://github.com/ollama/ollama/pull/6848.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6848.patch",
"merged_at": "2024-09-18T15:32:36"
}
|
The equivalent `go generate` logic is [here](https://github.com/ollama/ollama/blob/main/llm/generate/gen_linux.sh#L283-L290)
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6848/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5844
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5844/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5844/comments
|
https://api.github.com/repos/ollama/ollama/issues/5844/events
|
https://github.com/ollama/ollama/issues/5844
| 2,422,247,983
|
I_kwDOJ0Z1Ps6QYJIv
| 5,844
|
Connection refused on registry.ollama.ai
|
{
"login": "jcpraud",
"id": 199520,
"node_id": "MDQ6VXNlcjE5OTUyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/199520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcpraud",
"html_url": "https://github.com/jcpraud",
"followers_url": "https://api.github.com/users/jcpraud/followers",
"following_url": "https://api.github.com/users/jcpraud/following{/other_user}",
"gists_url": "https://api.github.com/users/jcpraud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jcpraud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcpraud/subscriptions",
"organizations_url": "https://api.github.com/users/jcpraud/orgs",
"repos_url": "https://api.github.com/users/jcpraud/repos",
"events_url": "https://api.github.com/users/jcpraud/events{/privacy}",
"received_events_url": "https://api.github.com/users/jcpraud/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-22T08:18:01
| 2024-07-23T08:24:56
| 2024-07-23T08:24:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
Since this morning I get a Connection refused error when trying to pull models:
ollama pull nomic-embed-text:137m-v1.5-fp16
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/nomic-embed-text/manifests/137m-v1.5-fp16": dial tcp 172.67.182.229:443: connect: connection refused
I'm behind a proxy that worked fine last week (the http_proxy and https_proxy environment variables are set correctly).
With my browser (behind the same proxy) I get this:
[registry.ollama.ai/v2/library/nomic-embed-text/manifests/137m-v1.5-fp16](https://registry.ollama.ai/v2/library/nomic-embed-text/manifests/137m-v1.5-fp16)
{"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.v2+json","config":{"digest":"sha256:31df23ea7daa448f9ccdbbcecce6c14689c8552222b80defd3830707c0139d4f","mediaType":"application/vnd.docker.container.image.v1+json","size":420},"layers":[{"digest":"sha256:970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6","mediaType":"application/vnd.ollama.image.model","size":274290656},{"digest":"sha256:c71d239df91726fc519c6eb72d318ec65820627232b2f796219e87dcf35d0ab4","mediaType":"application/vnd.ollama.image.license","size":11357},{"digest":"sha256:ce4a164fc04605703b485251fe9f1a181688ba0eb6badb80cc6335c0de17ca0d","mediaType":"application/vnd.ollama.image.params","size":17}]}
I updated Ollama to 0.2.7 this morning; the install script worked fine.
Did I miss something?
Thanks
JC
### OS
Linux
### GPU
_No response_
### CPU
Intel
### Ollama version
0.2.7
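A quick diagnostic sketch, assuming the Ollama client relies on Go's default proxy resolution (which reads the http_proxy/https_proxy variables): if this prints a nil proxy in the same environment the daemon runs in, the variables are not being picked up, which would explain the refused direct dial. Note that a systemd service has its own environment, separate from your shell.
```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Build the same kind of request a pull makes and ask Go's default
	// proxy resolver which proxy (if any) it would use for it.
	req, err := http.NewRequest("GET",
		"https://registry.ollama.ai/v2/library/nomic-embed-text/manifests/137m-v1.5-fp16", nil)
	if err != nil {
		panic(err)
	}
	proxyURL, err := http.ProxyFromEnvironment(req)
	if err != nil {
		panic(err)
	}
	// <nil> means the env vars were not seen and a direct dial is attempted,
	// which matches the "connection refused" in the report above.
	fmt.Println("proxy:", proxyURL)
}
```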
|
{
"login": "jcpraud",
"id": 199520,
"node_id": "MDQ6VXNlcjE5OTUyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/199520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcpraud",
"html_url": "https://github.com/jcpraud",
"followers_url": "https://api.github.com/users/jcpraud/followers",
"following_url": "https://api.github.com/users/jcpraud/following{/other_user}",
"gists_url": "https://api.github.com/users/jcpraud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jcpraud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcpraud/subscriptions",
"organizations_url": "https://api.github.com/users/jcpraud/orgs",
"repos_url": "https://api.github.com/users/jcpraud/repos",
"events_url": "https://api.github.com/users/jcpraud/events{/privacy}",
"received_events_url": "https://api.github.com/users/jcpraud/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5844/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1143
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1143/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1143/comments
|
https://api.github.com/repos/ollama/ollama/issues/1143/events
|
https://github.com/ollama/ollama/issues/1143
| 1,995,473,207
|
I_kwDOJ0Z1Ps528IE3
| 1,143
|
Enhancement: Add support for uploading/creating Modelfile via REST API
|
{
"login": "amithkoujalgi",
"id": 1876165,
"node_id": "MDQ6VXNlcjE4NzYxNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1876165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amithkoujalgi",
"html_url": "https://github.com/amithkoujalgi",
"followers_url": "https://api.github.com/users/amithkoujalgi/followers",
"following_url": "https://api.github.com/users/amithkoujalgi/following{/other_user}",
"gists_url": "https://api.github.com/users/amithkoujalgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amithkoujalgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amithkoujalgi/subscriptions",
"organizations_url": "https://api.github.com/users/amithkoujalgi/orgs",
"repos_url": "https://api.github.com/users/amithkoujalgi/repos",
"events_url": "https://api.github.com/users/amithkoujalgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/amithkoujalgi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-15T19:59:05
| 2023-11-16T00:41:16
| 2023-11-16T00:41:16
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Thanks for creating the REST APIs for interacting with Ollama.
I was wondering if it would be a good idea to introduce another API to upload/create a new Modelfile.
For example:
```
curl -X POST http://localhost:11434/api/upload -d '{
"name": "my-model-name",
"modelfile": "<contents of the modelfile here>",
"path": "/some/path"
}'
```
`path` could be a filesystem path relative to a predefined directory (which could be configured in Ollama's config).
It would be great to have this API, as it could simplify custom model creation, especially on remotely running Ollama servers (such as Docker deployments).
If the above API makes sense, we could also have APIs to update, delete, and list these Modelfiles.
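For what it's worth, a sketch of how this could look against the existing `/api/create` endpoint, which accepts inline Modelfile contents (the `modelfile` field name is taken from the API docs of the time, so treat it as an assumption and verify against your Ollama version):
```go
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"net/http"
	"os"
)

func main() {
	// Create a model from inline Modelfile contents via /api/create.
	payload, err := json.Marshal(map[string]string{
		"name":      "my-model-name",
		"modelfile": "FROM llama2\nSYSTEM You are a helpful assistant.",
	})
	if err != nil {
		panic(err)
	}
	resp, err := http.Post("http://localhost:11434/api/create",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // the endpoint streams JSON status lines
}
```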
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1143/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6283
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6283/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6283/comments
|
https://api.github.com/repos/ollama/ollama/issues/6283/events
|
https://github.com/ollama/ollama/issues/6283
| 2,457,727,340
|
I_kwDOJ0Z1Ps6SffFs
| 6,283
|
attempt to load llama 3.1 on system with insufficient system memory and crash with host alloc failure
|
{
"login": "razvanab",
"id": 2854730,
"node_id": "MDQ6VXNlcjI4NTQ3MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2854730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/razvanab",
"html_url": "https://github.com/razvanab",
"followers_url": "https://api.github.com/users/razvanab/followers",
"following_url": "https://api.github.com/users/razvanab/following{/other_user}",
"gists_url": "https://api.github.com/users/razvanab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/razvanab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/razvanab/subscriptions",
"organizations_url": "https://api.github.com/users/razvanab/orgs",
"repos_url": "https://api.github.com/users/razvanab/repos",
"events_url": "https://api.github.com/users/razvanab/events{/privacy}",
"received_events_url": "https://api.github.com/users/razvanab/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-08-09T11:37:18
| 2024-08-10T08:49:50
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I get this error when trying to load this model. Other Llama 3.1 models in the Ollama library work great.
(base) PS C:\Users\razva> ollama run CognitiveComputations/dolphin-llama3.1
Error: llama runner process has terminated: error:failed to create context with model 'C:\Users\razva\.ollama\blobs\sha256-c4e04968e3ca697b947c4820d7d4e58873e9f93908a043e7280863b31019b7df'
[verbose.txt](https://github.com/user-attachments/files/16560593/verbose.txt)
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.4
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6283/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4180
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4180/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4180/comments
|
https://api.github.com/repos/ollama/ollama/issues/4180/events
|
https://github.com/ollama/ollama/issues/4180
| 2,279,707,944
|
I_kwDOJ0Z1Ps6H4ZUo
| 4,180
|
account for common config/data locations when OLLAMA_MODELS is unset
|
{
"login": "asmrtfm",
"id": 154548075,
"node_id": "U_kgDOCTY3aw",
"avatar_url": "https://avatars.githubusercontent.com/u/154548075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asmrtfm",
"html_url": "https://github.com/asmrtfm",
"followers_url": "https://api.github.com/users/asmrtfm/followers",
"following_url": "https://api.github.com/users/asmrtfm/following{/other_user}",
"gists_url": "https://api.github.com/users/asmrtfm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asmrtfm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asmrtfm/subscriptions",
"organizations_url": "https://api.github.com/users/asmrtfm/orgs",
"repos_url": "https://api.github.com/users/asmrtfm/repos",
"events_url": "https://api.github.com/users/asmrtfm/events{/privacy}",
"received_events_url": "https://api.github.com/users/asmrtfm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-05T20:09:02
| 2024-05-14T19:15:10
| 2024-05-14T19:15:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Many, probably most, projects that interface with ollama - such as open-webui and privateGPT - end up setting the OLLAMA_MODELS variable, thus saving models in an alternate location, usually within the user's home directory.
This can become problematic given the chronic instability of these projects, where a single variance in one's environment can lead to undocumented edge cases.
I encountered one such issue and resolved to just use ollama itself in the terminal and not worry about a GUI on top of it.
I pulled down around a hundred gigabytes of models with `ollama pull <MODEL>`.
I used them in the terminal for a while (`ollama run <MODEL>`).
After restarting my machine, the models were no longer showing up.
I checked the docs; they say models will be under /usr/share/ollama/.ollama/models/registry.ollama.ai/...
I freaked out, but then realized what had happened.
However, I thought it was worth pointing out that there is a long-standing convention for programs to look for configs/data in additional common locations, such as ~/.ollama
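A minimal sketch of the resolution order being suggested; the fallback paths mirror the documented defaults, but treat the exact precedence as an assumption:
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// modelsDir picks the models directory: OLLAMA_MODELS if set, otherwise
// the per-user ~/.ollama/models convention this report asks to be honored.
func modelsDir() string {
	if dir := os.Getenv("OLLAMA_MODELS"); dir != "" {
		return dir
	}
	home, err := os.UserHomeDir()
	if err != nil {
		// Linux service installs default to this system location.
		return "/usr/share/ollama/.ollama/models"
	}
	return filepath.Join(home, ".ollama", "models")
}

func main() {
	fmt.Println(modelsDir())
}
```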
Just a thought. Not a big deal.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4180/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4180/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3700
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3700/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3700/comments
|
https://api.github.com/repos/ollama/ollama/issues/3700/events
|
https://github.com/ollama/ollama/pull/3700
| 2,248,354,679
|
PR_kwDOJ0Z1Ps5s7TNG
| 3,700
|
Update README.md to include acknowledgements to llama.cpp
|
{
"login": "survirtual",
"id": 20385618,
"node_id": "MDQ6VXNlcjIwMzg1NjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/20385618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/survirtual",
"html_url": "https://github.com/survirtual",
"followers_url": "https://api.github.com/users/survirtual/followers",
"following_url": "https://api.github.com/users/survirtual/following{/other_user}",
"gists_url": "https://api.github.com/users/survirtual/gists{/gist_id}",
"starred_url": "https://api.github.com/users/survirtual/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/survirtual/subscriptions",
"organizations_url": "https://api.github.com/users/survirtual/orgs",
"repos_url": "https://api.github.com/users/survirtual/repos",
"events_url": "https://api.github.com/users/survirtual/events{/privacy}",
"received_events_url": "https://api.github.com/users/survirtual/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-04-17T13:50:52
| 2024-04-17T19:12:55
| 2024-04-17T17:48:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3700",
"html_url": "https://github.com/ollama/ollama/pull/3700",
"diff_url": "https://github.com/ollama/ollama/pull/3700.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3700.patch",
"merged_at": null
}
|
resolves #3697
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3700/reactions",
"total_count": 14,
"+1": 14,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3700/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6798
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6798/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6798/comments
|
https://api.github.com/repos/ollama/ollama/issues/6798/events
|
https://github.com/ollama/ollama/issues/6798
| 2,526,018,900
|
I_kwDOJ0Z1Ps6Wj_1U
| 6,798
|
Running the model under jetpack6 failed
|
{
"login": "litao-zhx",
"id": 76981650,
"node_id": "MDQ6VXNlcjc2OTgxNjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/76981650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/litao-zhx",
"html_url": "https://github.com/litao-zhx",
"followers_url": "https://api.github.com/users/litao-zhx/followers",
"following_url": "https://api.github.com/users/litao-zhx/following{/other_user}",
"gists_url": "https://api.github.com/users/litao-zhx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/litao-zhx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/litao-zhx/subscriptions",
"organizations_url": "https://api.github.com/users/litao-zhx/orgs",
"repos_url": "https://api.github.com/users/litao-zhx/repos",
"events_url": "https://api.github.com/users/litao-zhx/events{/privacy}",
"received_events_url": "https://api.github.com/users/litao-zhx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-09-14T06:08:19
| 2024-09-25T21:24:49
| 2024-09-25T21:24:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

The error message is shown in the figure above. After the download completes, the runtime keeps timing out.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6798/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/381
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/381/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/381/comments
|
https://api.github.com/repos/ollama/ollama/issues/381/events
|
https://github.com/ollama/ollama/pull/381
| 1,857,048,115
|
PR_kwDOJ0Z1Ps5YREnw
| 381
|
retry on unauthorized chunk push
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-18T17:24:22
| 2023-08-18T20:49:26
| 2023-08-18T20:49:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/381",
"html_url": "https://github.com/ollama/ollama/pull/381",
"diff_url": "https://github.com/ollama/ollama/pull/381.diff",
"patch_url": "https://github.com/ollama/ollama/pull/381.patch",
"merged_at": "2023-08-18T20:49:25"
}
|
The token issued for authorized requests has a lifetime of 1h. If an upload exceeds 1h, a chunk push will fail since the token is only created on a "start upload" request.
This replaces the Pipe with a SectionReader, which is simpler and implements Seek, a requirement for makeRequestWithRetry. This is slightly worse than using a Pipe since progress updates are tied directly to the chunk size instead of being controlled separately; i.e., increasing the chunk size decreases how often the client gets a progress update.
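A minimal sketch of the SectionReader pattern described above; the chunk size, filename, and retry wiring are illustrative, not the actual ollama code:
```go
package main

import (
	"fmt"
	"io"
	"os"
)

// push stands in for the HTTP chunk upload; because r is an io.ReadSeeker,
// a failed attempt could Seek back to the start and retry with a fresh token.
func push(r io.ReadSeeker, off, n int64) error {
	fmt.Printf("pushing %d bytes at offset %d\n", n, off)
	_, err := io.Copy(io.Discard, r)
	return err
}

func main() {
	f, err := os.Open("blob.bin") // illustrative layer blob
	if err != nil {
		panic(err)
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		panic(err)
	}

	const chunkSize = int64(64 << 20) // 64 MiB, illustrative
	for off := int64(0); off < info.Size(); off += chunkSize {
		n := info.Size() - off
		if n > chunkSize {
			n = chunkSize
		}
		// Each chunk is an independent, seekable view over the same file.
		if err := push(io.NewSectionReader(f, off, n), off, n); err != nil {
			panic(err)
		}
	}
}
```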
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/381/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4280
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4280/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4280/comments
|
https://api.github.com/repos/ollama/ollama/issues/4280/events
|
https://github.com/ollama/ollama/issues/4280
| 2,287,256,732
|
I_kwDOJ0Z1Ps6IVMSc
| 4,280
|
Error: pull model manifest: file does not exist
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-05-09T09:12:28
| 2024-05-10T12:17:13
| 2024-05-10T12:17:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
@dhiltgen
I have pushed this model to ollama.com, so why can it not run? The reported error is "Error: pull model manifest: file does not exist".
ollama run taozhiyuai/openbiollm-llama-3-chinese
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32
|
{
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4280/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3328
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3328/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3328/comments
|
https://api.github.com/repos/ollama/ollama/issues/3328/events
|
https://github.com/ollama/ollama/issues/3328
| 2,204,462,973
|
I_kwDOJ0Z1Ps6DZW99
| 3,328
|
/set format json does not appear to work over the API.
|
{
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"repos_url": "https://api.github.com/users/phalexo/repos",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-03-24T17:16:26
| 2024-03-27T22:39:24
| 2024-03-27T22:39:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I would like to set JSON mode via the API or in the Modelfile. I tried inserting the command right after the system prompt in the template; it didn't work.
Then I made it part of the system prompt; that didn't work either.
### What did you expect to see?
When "/set format json" appears anywhere, either in the system prompt, or as part of user query, I expect the mode to be changed to JSON, and the output be in JSON format.
### Steps to reproduce
It is obvious: I cannot put it anywhere in the Modelfile to make it change the mode.
### Are there any recent changes that introduced the issue?
no.
### OS
Linux
### Architecture
amd64
### Platform
Docker
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
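Note: `/set` commands only work inside the interactive `ollama run` session; over the API, JSON mode is requested with the top-level `format` parameter instead. A minimal sketch against `/api/generate` (model name is illustrative):
```go
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"net/http"
	"os"
)

func main() {
	// The API-side equivalent of the CLI's "/set format json":
	// pass "format": "json" in the request body.
	payload, err := json.Marshal(map[string]any{
		"model":  "llama2",
		"prompt": "List three colors as JSON.",
		"format": "json",
		"stream": false,
	})
	if err != nil {
		panic(err)
	}
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body)
}
```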
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3328/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3328/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2715
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2715/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2715/comments
|
https://api.github.com/repos/ollama/ollama/issues/2715/events
|
https://github.com/ollama/ollama/issues/2715
| 2,151,676,999
|
I_kwDOJ0Z1Ps6AP_xH
| 2,715
|
getting error while running gemma model
|
{
"login": "rootUJ99",
"id": 22429625,
"node_id": "MDQ6VXNlcjIyNDI5NjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22429625?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rootUJ99",
"html_url": "https://github.com/rootUJ99",
"followers_url": "https://api.github.com/users/rootUJ99/followers",
"following_url": "https://api.github.com/users/rootUJ99/following{/other_user}",
"gists_url": "https://api.github.com/users/rootUJ99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rootUJ99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rootUJ99/subscriptions",
"organizations_url": "https://api.github.com/users/rootUJ99/orgs",
"repos_url": "https://api.github.com/users/rootUJ99/repos",
"events_url": "https://api.github.com/users/rootUJ99/events{/privacy}",
"received_events_url": "https://api.github.com/users/rootUJ99/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-23T19:28:00
| 2024-02-23T19:31:36
| 2024-02-23T19:31:36
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
`Error: Post "http://127.0.0.1:11434/api/chat": EOF`
I get this error after running `ollama run gemma`.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2715/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7249
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7249/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7249/comments
|
https://api.github.com/repos/ollama/ollama/issues/7249/events
|
https://github.com/ollama/ollama/pull/7249
| 2,596,308,239
|
PR_kwDOJ0Z1Ps5_C8Lt
| 7,249
|
fix #7247 - invalid image input
|
{
"login": "ozbillwang",
"id": 8954908,
"node_id": "MDQ6VXNlcjg5NTQ5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8954908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ozbillwang",
"html_url": "https://github.com/ozbillwang",
"followers_url": "https://api.github.com/users/ozbillwang/followers",
"following_url": "https://api.github.com/users/ozbillwang/following{/other_user}",
"gists_url": "https://api.github.com/users/ozbillwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ozbillwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ozbillwang/subscriptions",
"organizations_url": "https://api.github.com/users/ozbillwang/orgs",
"repos_url": "https://api.github.com/users/ozbillwang/repos",
"events_url": "https://api.github.com/users/ozbillwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ozbillwang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-18T03:29:29
| 2024-10-23T17:31:04
| 2024-10-23T17:31:04
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7249",
"html_url": "https://github.com/ollama/ollama/pull/7249",
"diff_url": "https://github.com/ollama/ollama/pull/7249.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7249.patch",
"merged_at": "2024-10-23T17:31:04"
}
|
Fix #7247 - invalid image input
The only change is to prefix the `image_url` value with `data:image/png;base64,`.
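For reference, a sketch of the resulting data-URL shape; the filename is illustrative:
```go
package main

import (
	"encoding/base64"
	"fmt"
	"os"
)

func main() {
	// Wrap raw PNG bytes in the data-URL form the docs example expects.
	raw, err := os.ReadFile("image.png") // illustrative input file
	if err != nil {
		panic(err)
	}
	dataURL := "data:image/png;base64," + base64.StdEncoding.EncodeToString(raw)
	fmt.Println(len(dataURL), dataURL[:22]) // prefix is "data:image/png;base64,"
}
```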
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7249/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7028
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7028/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7028/comments
|
https://api.github.com/repos/ollama/ollama/issues/7028/events
|
https://github.com/ollama/ollama/issues/7028
| 2,554,731,226
|
I_kwDOJ0Z1Ps6YRhra
| 7,028
|
Add gguf cerebras/Llama3-DocChat-1.0-8B to ollama
|
{
"login": "djaffer",
"id": 5740725,
"node_id": "MDQ6VXNlcjU3NDA3MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5740725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/djaffer",
"html_url": "https://github.com/djaffer",
"followers_url": "https://api.github.com/users/djaffer/followers",
"following_url": "https://api.github.com/users/djaffer/following{/other_user}",
"gists_url": "https://api.github.com/users/djaffer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/djaffer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/djaffer/subscriptions",
"organizations_url": "https://api.github.com/users/djaffer/orgs",
"repos_url": "https://api.github.com/users/djaffer/repos",
"events_url": "https://api.github.com/users/djaffer/events{/privacy}",
"received_events_url": "https://api.github.com/users/djaffer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-29T06:20:02
| 2024-09-29T06:26:11
| 2024-09-29T06:26:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can this model be added?
https://huggingface.co/bartowski/Llama3-DocChat-1.0-8B-GGUF
```
FROM ./Llama3-DocChat-1.0-8B.Q6_K.gguf
TEMPLATE "{{ if .System }}System: {{ .System }}
{{ end }}{{ if .Prompt }}User: {{ .Prompt }}
{{ end }}Assistant: <|begin_of_text|>{{ .Response }}
"
```
I tried adding it, but it hallucinates.

|
{
"login": "djaffer",
"id": 5740725,
"node_id": "MDQ6VXNlcjU3NDA3MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5740725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/djaffer",
"html_url": "https://github.com/djaffer",
"followers_url": "https://api.github.com/users/djaffer/followers",
"following_url": "https://api.github.com/users/djaffer/following{/other_user}",
"gists_url": "https://api.github.com/users/djaffer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/djaffer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/djaffer/subscriptions",
"organizations_url": "https://api.github.com/users/djaffer/orgs",
"repos_url": "https://api.github.com/users/djaffer/repos",
"events_url": "https://api.github.com/users/djaffer/events{/privacy}",
"received_events_url": "https://api.github.com/users/djaffer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7028/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3645
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3645/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3645/comments
|
https://api.github.com/repos/ollama/ollama/issues/3645/events
|
https://github.com/ollama/ollama/issues/3645
| 2,243,095,589
|
I_kwDOJ0Z1Ps6Fsuwl
| 3,645
|
keep_alive doesn't work for OpenAI API
|
{
"login": "longcw",
"id": 6198400,
"node_id": "MDQ6VXNlcjYxOTg0MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6198400?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/longcw",
"html_url": "https://github.com/longcw",
"followers_url": "https://api.github.com/users/longcw/followers",
"following_url": "https://api.github.com/users/longcw/following{/other_user}",
"gists_url": "https://api.github.com/users/longcw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/longcw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/longcw/subscriptions",
"organizations_url": "https://api.github.com/users/longcw/orgs",
"repos_url": "https://api.github.com/users/longcw/repos",
"events_url": "https://api.github.com/users/longcw/events{/privacy}",
"received_events_url": "https://api.github.com/users/longcw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-15T08:57:25
| 2024-06-10T07:32:13
| 2024-04-15T19:11:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
`keep_alive` doesn't work for the OpenAI-compatible API. When `keep_alive` is set to 0 in an OpenAI API call through http://localhost:11434/v1/chat/completions, the model is not unloaded after the call.
### What did you expect to see?
The model should be unloaded when `keep_alive` is set to 0.
### Steps to reproduce
```bash
curl http://localhost:11434/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "llama2",
"keep_alive": 0,
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
```
The following works correctly
```bash
curl http://localhost:11434/api/chat \
-H "Content-Type: application/json" \
-d '{
"model": "llama2",
"keep_alive": 0,
"stream": false,
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
```
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.27
### GPU
Nvidia
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3645/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2847
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2847/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2847/comments
|
https://api.github.com/repos/ollama/ollama/issues/2847/events
|
https://github.com/ollama/ollama/issues/2847
| 2,162,301,782
|
I_kwDOJ0Z1Ps6A4htW
| 2,847
|
Add custom tags to models to organise models
|
{
"login": "trymeouteh",
"id": 31172274,
"node_id": "MDQ6VXNlcjMxMTcyMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trymeouteh",
"html_url": "https://github.com/trymeouteh",
"followers_url": "https://api.github.com/users/trymeouteh/followers",
"following_url": "https://api.github.com/users/trymeouteh/following{/other_user}",
"gists_url": "https://api.github.com/users/trymeouteh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trymeouteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trymeouteh/subscriptions",
"organizations_url": "https://api.github.com/users/trymeouteh/orgs",
"repos_url": "https://api.github.com/users/trymeouteh/repos",
"events_url": "https://api.github.com/users/trymeouteh/events{/privacy}",
"received_events_url": "https://api.github.com/users/trymeouteh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-03-01T00:10:18
| 2024-03-01T00:10:18
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Let's say you have dozens of models installed: several for coding assistance, several for chatbots.
Programs that use models from Ollama will likely ask you to choose from all the models you have installed, which would be a lot to sort through.
Why not add a feature where users can add tags to models to organise them? Users could add a "coding" tag to models used as coding assistants and a "chat" tag to models used for chatbots.
If other programs integrate this tagging feature, they could list only models with certain tags. This way your IDE would list only the code-assist models and not all the others.
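A sketch of the idea; the `Tags` field and the filter are hypothetical and not part of Ollama's actual API:
```go
package main

import "fmt"

// Model is a hypothetical client-side view of an installed model;
// Ollama's real list API has no Tags field today.
type Model struct {
	Name string
	Tags []string
}

// filterByTag returns only the models carrying the given tag.
func filterByTag(models []Model, tag string) []Model {
	var out []Model
	for _, m := range models {
		for _, t := range m.Tags {
			if t == tag {
				out = append(out, m)
				break
			}
		}
	}
	return out
}

func main() {
	models := []Model{
		{Name: "codellama:13b", Tags: []string{"coding"}},
		{Name: "llama2:7b", Tags: []string{"chat"}},
	}
	for _, m := range filterByTag(models, "coding") {
		fmt.Println(m.Name) // an IDE would show only these
	}
}
```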
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2847/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2847/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8663
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8663/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8663/comments
|
https://api.github.com/repos/ollama/ollama/issues/8663/events
|
https://github.com/ollama/ollama/pull/8663
| 2,818,399,336
|
PR_kwDOJ0Z1Ps6JXytP
| 8,663
|
Update README.md Adding DeepSeek to the table of models
|
{
"login": "teymuur",
"id": 64795612,
"node_id": "MDQ6VXNlcjY0Nzk1NjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/64795612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/teymuur",
"html_url": "https://github.com/teymuur",
"followers_url": "https://api.github.com/users/teymuur/followers",
"following_url": "https://api.github.com/users/teymuur/following{/other_user}",
"gists_url": "https://api.github.com/users/teymuur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/teymuur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/teymuur/subscriptions",
"organizations_url": "https://api.github.com/users/teymuur/orgs",
"repos_url": "https://api.github.com/users/teymuur/repos",
"events_url": "https://api.github.com/users/teymuur/events{/privacy}",
"received_events_url": "https://api.github.com/users/teymuur/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-29T14:34:27
| 2025-01-30T05:12:26
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8663",
"html_url": "https://github.com/ollama/ollama/pull/8663",
"diff_url": "https://github.com/ollama/ollama/pull/8663.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8663.patch",
"merged_at": null
}
|
This is just a minor change: I added DeepSeek R1 to the model library table. Only `README.md` was changed.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8663/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/76
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/76/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/76/comments
|
https://api.github.com/repos/ollama/ollama/issues/76/events
|
https://github.com/ollama/ollama/pull/76
| 1,802,015,062
|
PR_kwDOJ0Z1Ps5VXeZ0
| 76
|
fix pull race
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-13T02:07:48
| 2023-07-13T02:21:16
| 2023-07-13T02:21:13
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/76",
"html_url": "https://github.com/ollama/ollama/pull/76",
"diff_url": "https://github.com/ollama/ollama/pull/76.diff",
"patch_url": "https://github.com/ollama/ollama/pull/76.patch",
"merged_at": "2023-07-13T02:21:13"
}
|
Fixes #75
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/76/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/76/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5330
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5330/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5330/comments
|
https://api.github.com/repos/ollama/ollama/issues/5330/events
|
https://github.com/ollama/ollama/issues/5330
| 2,378,426,464
|
I_kwDOJ0Z1Ps6Nw-hg
| 5,330
|
Ollama installation on WSL (Ubuntu 24.04) fails with certificate problem
|
{
"login": "tagwato",
"id": 11979069,
"node_id": "MDQ6VXNlcjExOTc5MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/11979069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tagwato",
"html_url": "https://github.com/tagwato",
"followers_url": "https://api.github.com/users/tagwato/followers",
"following_url": "https://api.github.com/users/tagwato/following{/other_user}",
"gists_url": "https://api.github.com/users/tagwato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tagwato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tagwato/subscriptions",
"organizations_url": "https://api.github.com/users/tagwato/orgs",
"repos_url": "https://api.github.com/users/tagwato/repos",
"events_url": "https://api.github.com/users/tagwato/events{/privacy}",
"received_events_url": "https://api.github.com/users/tagwato/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2024-06-27T15:09:06
| 2024-08-04T01:19:23
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I could not install Ollama.
Operating system:
- Windows Subsystem for Linux (WSL2)
- Installed distro: Ubuntu 24.04
Command executed, as explained in https://github.com/ollama/ollama:
curl -fsSL https://ollama.com/install.sh | sh
This gives the following output, with an error about a certificate problem:
```
user@WK-325467:~$ curl -fsSL https://ollama.com/install.sh | sh
>>> Downloading ollama...
######################################################################## 100.0%#=#=# curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
```
Reading https://curl.se/docs/sslcerts.html explains that the problem is about certificates, but it's not clear whether it is a server or a local problem (the error message is ambiguous).
I also tried, without success:
curl --insecure -fsSL https://ollama.com/install.sh | sh
And this didn't help either:
sudo apt-get install ca-certificates -y
It seems that the Ollama server's certificate cannot be validated on my system (but I'm not sure).
===== EDITED =====
I found a workaround.
The problem is not with the visible curl command.
It occurs inside the install.sh script (internal curl commands).
I edited the install.sh file, changed all curl invocations to "curl -k" (insecure), and... it WORKED.
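For readers hitting the same error, a less drastic alternative to `-k` may be to refresh the certificate store inside WSL first (a hedged suggestion, not verified in this report):
```
# Reinstall and rebuild the Ubuntu CA bundle inside WSL, then retry the install.
sudo apt-get update
sudo apt-get install --reinstall -y ca-certificates
sudo update-ca-certificates --fresh
curl -fsSL https://ollama.com/install.sh | sh
```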
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5330/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5583
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5583/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5583/comments
|
https://api.github.com/repos/ollama/ollama/issues/5583/events
|
https://github.com/ollama/ollama/pull/5583
| 2,399,335,425
|
PR_kwDOJ0Z1Ps505HAb
| 5,583
|
Fix context exhaustion integration test for small gpus
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-09T22:30:40
| 2024-07-20T22:48:24
| 2024-07-20T22:48:21
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5583",
"html_url": "https://github.com/ollama/ollama/pull/5583",
"diff_url": "https://github.com/ollama/ollama/pull/5583.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5583.patch",
"merged_at": "2024-07-20T22:48:21"
}
|
On the smaller GPUs, the initial model load of llama2 took over 30s (the default timeout for the DoGenerate helper)
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5583/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4367
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4367/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4367/comments
|
https://api.github.com/repos/ollama/ollama/issues/4367/events
|
https://github.com/ollama/ollama/issues/4367
| 2,291,070,477
|
I_kwDOJ0Z1Ps6IjvYN
| 4,367
|
better docs on python library settings
|
{
"login": "nikhil-swamix",
"id": 54004431,
"node_id": "MDQ6VXNlcjU0MDA0NDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/54004431?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikhil-swamix",
"html_url": "https://github.com/nikhil-swamix",
"followers_url": "https://api.github.com/users/nikhil-swamix/followers",
"following_url": "https://api.github.com/users/nikhil-swamix/following{/other_user}",
"gists_url": "https://api.github.com/users/nikhil-swamix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikhil-swamix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikhil-swamix/subscriptions",
"organizations_url": "https://api.github.com/users/nikhil-swamix/orgs",
"repos_url": "https://api.github.com/users/nikhil-swamix/repos",
"events_url": "https://api.github.com/users/nikhil-swamix/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikhil-swamix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 7706485628,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1ejfA",
"url": "https://api.github.com/repos/ollama/ollama/labels/python",
"name": "python",
"color": "59642B",
"default": false,
"description": "relating to the ollama-python client library"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-05-11T21:24:29
| 2024-12-02T09:15:08
| 2024-12-02T08:04:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Many of the options in the source regarding GPU settings (`C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\ollama\_types.py`) appear to do nothing; I've tried tweaking all of them! Please improve the documentation on this: only the mlock and mmap options work for resource-allocation settings. GPU selection is completely opaque; it usually selects the most idle GPU. Also, changing any model-init (load-time) option restarts the server. Is there any hot-reload setting for the model-init params, like `/set` in interactive mode? Or is that not feasible because they can't be changed once the model is initialized?
For reference, this is the file I'm talking about (see `num_gpu` and `main_gpu`):
```
from typing import Sequence, TypedDict  # imports added for completeness


class Options(TypedDict, total=False):
    # load time options
    numa: bool
    num_ctx: int
    num_batch: int
    num_gpu: int
    main_gpu: int
    low_vram: bool
    f16_kv: bool
    logits_all: bool
    vocab_only: bool
    use_mmap: bool
    use_mlock: bool
    embedding_only: bool
    num_thread: int

    # runtime options
    num_keep: int
    seed: int
    num_predict: int
    top_k: int
    top_p: float
    tfs_z: float
    typical_p: float
    repeat_last_n: int
    temperature: float
    repeat_penalty: float
    presence_penalty: float
    frequency_penalty: float
    mirostat: int
    mirostat_tau: float
    mirostat_eta: float
    penalize_newline: bool
    stop: Sequence[str]
```
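For context, a minimal sketch of passing a few of these options through the Python client (model name and values are illustrative):
```
import ollama

# Runtime options ride along on each request; changing load-time options
# triggers a model reload, which matches the behavior described above.
response = ollama.chat(
    model="llama3",  # illustrative model name
    messages=[{"role": "user", "content": "Hello"}],
    options={"temperature": 0.2, "num_ctx": 4096, "mirostat": 2},
)
print(response["message"]["content"])
```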
# suggestion
```
# Suggestion: could the options be nested, like:
settings = {
    "cpu": {"a": "b"},  # placeholder keys/values
    "gpu": {"c": "d"},
    "llamacpp": {"more_settings": None},
}
# ...with a few aliases like:
#   max_tokens = num_ctx
# ...and converters to human-friendly values like:
#   num_ctx = "32k"  # -> 32,000
```
_____
# however
`from llama_cpp import Llama`
This works when we set `os.environ["CUDA_VISIBLE_DEVICES"] = "1"` and various other options, so please add comparable features. Using Ollama programmatically, or via the OpenAI client, is very difficult; for example, if we want to customize parameters like mirostat using profiles ("creative", "precise", "balanced") like Copilot does, we need first-class support. The `ollama.chat` method shows very little documentation in its docstring. A sketch of the llama-cpp-python pattern follows below.
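For comparison, a minimal sketch of the llama-cpp-python pattern referred to above (the model path is a placeholder):
```
import os

# Must be set before CUDA is initialized by the library.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

from llama_cpp import Llama  # imported after the env var on purpose

llm = Llama(model_path="./model.gguf", n_gpu_layers=-1)  # placeholder path
print(llm("Hello", max_tokens=16))
```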
# question
May I work on improving the documentation and adding more customization, exposing the raw settings provided by llama.cpp? Let me know. Ollama provides a good level of automation but has customization issues.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4367/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4367/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3005
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3005/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3005/comments
|
https://api.github.com/repos/ollama/ollama/issues/3005/events
|
https://github.com/ollama/ollama/pull/3005
| 2,176,436,838
|
PR_kwDOJ0Z1Ps5pGoYK
| 3,005
|
fix: allow importing a model from name reference
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-08T17:06:04
| 2024-03-08T17:27:48
| 2024-03-08T17:27:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3005",
"html_url": "https://github.com/ollama/ollama/pull/3005",
"diff_url": "https://github.com/ollama/ollama/pull/3005.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3005.patch",
"merged_at": "2024-03-08T17:27:47"
}
|
fixes #3003
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3005/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/492
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/492/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/492/comments
|
https://api.github.com/repos/ollama/ollama/issues/492/events
|
https://github.com/ollama/ollama/pull/492
| 1,886,761,520
|
PR_kwDOJ0Z1Ps5Z1DR7
| 492
|
fix nil pointer dereference
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-08T00:24:48
| 2023-09-08T00:25:24
| 2023-09-08T00:25:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/492",
"html_url": "https://github.com/ollama/ollama/pull/492",
"diff_url": "https://github.com/ollama/ollama/pull/492.diff",
"patch_url": "https://github.com/ollama/ollama/pull/492.patch",
"merged_at": "2023-09-08T00:25:23"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/492/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8098
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8098/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8098/comments
|
https://api.github.com/repos/ollama/ollama/issues/8098/events
|
https://github.com/ollama/ollama/issues/8098
| 2,739,985,117
|
I_kwDOJ0Z1Ps6jUNrd
| 8,098
|
current device: 0, in function ggml_backend_cuda_synchronize at llama/ggml-cuda/ggml-cuda.cu:2317
|
{
"login": "kingszun",
"id": 55080597,
"node_id": "MDQ6VXNlcjU1MDgwNTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/55080597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kingszun",
"html_url": "https://github.com/kingszun",
"followers_url": "https://api.github.com/users/kingszun/followers",
"following_url": "https://api.github.com/users/kingszun/following{/other_user}",
"gists_url": "https://api.github.com/users/kingszun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kingszun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingszun/subscriptions",
"organizations_url": "https://api.github.com/users/kingszun/orgs",
"repos_url": "https://api.github.com/users/kingszun/repos",
"events_url": "https://api.github.com/users/kingszun/events{/privacy}",
"received_events_url": "https://api.github.com/users/kingszun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-12-14T16:45:27
| 2024-12-15T13:15:47
| 2024-12-15T13:15:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
My usage was as follows:
- I pulled llama3.2:3b on a local machine using the CPU-only version.
- I saved /root/.ollama/ to a private NAS.
- I then copied the model from the NAS to /root/.ollama/ on a private H100 server with no external internet access.
Running the model there produces:
```
CUDA error: an illegal memory access was encountered
current device: 0, in function ggml_backend_cuda_synchronize at llama/ggml-cuda/ggml-cuda.cu:2317
cudaStreamSynchronize(cuda_ctx->stream())
llama/ggml-cuda/ggml-cuda.cu:96: CUDA error
SIGSEGV: segmentation violation
PC=0x7f7849b360d7 m=7 sigcode=1 addr=0x204c03fd8
signal arrived during cgo execution
```
The command and its output were:
```
ollama run llama3.2:3b
>>> hi
HowError: an error was encountered while running the model: CUDA error
```
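For reference, a sketch of the transfer workflow described above; paths and mount points are illustrative:
```
# On the internet-connected machine: pull once, then stage on the NAS.
ollama pull llama3.2:3b
rsync -a /root/.ollama/ /mnt/nas/ollama-backup/

# On the air-gapped H100 server: restore the store and run.
rsync -a /mnt/nas/ollama-backup/ /root/.ollama/
ollama run llama3.2:3b
```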
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
ollama version is 0.5.2-rc3-0-g581a4a5-dirty
|
{
"login": "kingszun",
"id": 55080597,
"node_id": "MDQ6VXNlcjU1MDgwNTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/55080597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kingszun",
"html_url": "https://github.com/kingszun",
"followers_url": "https://api.github.com/users/kingszun/followers",
"following_url": "https://api.github.com/users/kingszun/following{/other_user}",
"gists_url": "https://api.github.com/users/kingszun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kingszun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingszun/subscriptions",
"organizations_url": "https://api.github.com/users/kingszun/orgs",
"repos_url": "https://api.github.com/users/kingszun/repos",
"events_url": "https://api.github.com/users/kingszun/events{/privacy}",
"received_events_url": "https://api.github.com/users/kingszun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8098/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1446
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1446/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1446/comments
|
https://api.github.com/repos/ollama/ollama/issues/1446/events
|
https://github.com/ollama/ollama/issues/1446
| 2,033,727,649
|
I_kwDOJ0Z1Ps55ODih
| 1,446
|
letsencrypt certificates installed but get error on https
|
{
"login": "itscvenk",
"id": 117738376,
"node_id": "U_kgDOBwSLiA",
"avatar_url": "https://avatars.githubusercontent.com/u/117738376?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itscvenk",
"html_url": "https://github.com/itscvenk",
"followers_url": "https://api.github.com/users/itscvenk/followers",
"following_url": "https://api.github.com/users/itscvenk/following{/other_user}",
"gists_url": "https://api.github.com/users/itscvenk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itscvenk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itscvenk/subscriptions",
"organizations_url": "https://api.github.com/users/itscvenk/orgs",
"repos_url": "https://api.github.com/users/itscvenk/repos",
"events_url": "https://api.github.com/users/itscvenk/events{/privacy}",
"received_events_url": "https://api.github.com/users/itscvenk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-12-09T08:30:03
| 2023-12-09T18:08:40
| 2023-12-09T18:08:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello
I saw https://github.com/jmorganca/ollama/pull/1310#issue-2015690206 which says
`Place cert.pem and key.pem into ~/.ollama/ssl/ restart server. It will come up in SSL mode. Remove, rename or delete files to disable ssl mode.`
I generated the Let's Encrypt certificates and copied them into /usr/share/ollama/.ollama (as I had followed the manual instructions for installing Ollama). I did a chown ollama:ollama on both files I copied into the above folder, then ran systemctl daemon-reload as well as systemctl restart ollama, and rebooted my Ubuntu 20 VM for good measure.
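For comparison, the referenced PR specifies an `ssl/` subdirectory; under the manual-install home directory that layout would look roughly like this (a hedged sketch, source filenames illustrative):
```
sudo mkdir -p /usr/share/ollama/.ollama/ssl
sudo cp fullchain.pem /usr/share/ollama/.ollama/ssl/cert.pem
sudo cp privkey.pem /usr/share/ollama/.ollama/ssl/key.pem
sudo chown -R ollama:ollama /usr/share/ollama/.ollama/ssl
sudo systemctl restart ollama
```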
And then with curl over a plain http request, I get a response. All is well.
For an https request, I get:
```
curl https://mysubdomain.mydomain.com:11434/api/generate -d '{
  "model": "openchat",
  "stream": false,
  "prompt": "Hello"
}'
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
```
journalctl -u ollama shows no logs for this :-( , obviously because the request never reached Ollama.
Usually the above error occurs when there is a port conflict or the port is not open. Is SSL served on a different port than 11434?
How do I get SSL to work, please?
Thanks
Edit:
Note:
I removed the id_ed25519 file there, as well as the one with the .pub extension, and restarted the daemon and the service; then I see the following in the logs:
```
Dec 09 14:02:27 mysubdomain.mydomain.com ollama[2971]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Dec 09 14:02:27 mysubdomain.mydomain.com ollama[2971]: Your new public key is:
Dec 09 14:02:27 mysubdomain.mydomain.com ollama[2971]: ssh-ed25519 AA<<truncated>>Y
```
But the error remains on https
|
{
"login": "itscvenk",
"id": 117738376,
"node_id": "U_kgDOBwSLiA",
"avatar_url": "https://avatars.githubusercontent.com/u/117738376?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itscvenk",
"html_url": "https://github.com/itscvenk",
"followers_url": "https://api.github.com/users/itscvenk/followers",
"following_url": "https://api.github.com/users/itscvenk/following{/other_user}",
"gists_url": "https://api.github.com/users/itscvenk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itscvenk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itscvenk/subscriptions",
"organizations_url": "https://api.github.com/users/itscvenk/orgs",
"repos_url": "https://api.github.com/users/itscvenk/repos",
"events_url": "https://api.github.com/users/itscvenk/events{/privacy}",
"received_events_url": "https://api.github.com/users/itscvenk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1446/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3279
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3279/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3279/comments
|
https://api.github.com/repos/ollama/ollama/issues/3279/events
|
https://github.com/ollama/ollama/issues/3279
| 2,199,254,464
|
I_kwDOJ0Z1Ps6DFfXA
| 3,279
|
Mount model into pvc to get faster boot with init container
|
{
"login": "didlawowo",
"id": 12622760,
"node_id": "MDQ6VXNlcjEyNjIyNzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/12622760?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/didlawowo",
"html_url": "https://github.com/didlawowo",
"followers_url": "https://api.github.com/users/didlawowo/followers",
"following_url": "https://api.github.com/users/didlawowo/following{/other_user}",
"gists_url": "https://api.github.com/users/didlawowo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/didlawowo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/didlawowo/subscriptions",
"organizations_url": "https://api.github.com/users/didlawowo/orgs",
"repos_url": "https://api.github.com/users/didlawowo/repos",
"events_url": "https://api.github.com/users/didlawowo/events{/privacy}",
"received_events_url": "https://api.github.com/users/didlawowo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-21T06:27:10
| 2024-03-22T15:19:10
| 2024-03-22T15:19:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
When you're trying to use a big model like a 70B, it takes a lot of time to start because of the download, and it currently isn't possible to build an Ollama image with the model inside.
It would be a good idea to create a bootstrap volume that downloads the model into a PVC.
Then when you update your pod you don't have to download the whole model again; you just keep the PVC that was mounted in the previous pod.
### How should we solve this?
Provide an init container that loads models into a PVC, with a spec describing which models should be downloaded, and keep the PVC persistent (with size / storageClass / etc.); a rough sketch follows below.
Providing a bootstrap script to download a local model from another Ollama URL would also be nice, so ML engineers can push their models into this PVC easily.
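For illustration, a hedged sketch of what such an init container could look like (image, model name, and mount path are illustrative; `ollama pull` needs a running server, hence the backgrounded `serve`):
```
# Fragment of a Pod spec; the "models" volume is backed by the persistent PVC.
initContainers:
  - name: pull-model
    image: ollama/ollama
    command: ["sh", "-c", "ollama serve & sleep 5 && ollama pull llama2:70b"]
    volumeMounts:
      - name: models
        mountPath: /root/.ollama
containers:
  - name: ollama
    image: ollama/ollama
    volumeMounts:
      - name: models
        mountPath: /root/.ollama
```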
### What is the impact of not solving this?
Very slow starts, and Ollama can't easily be customized with different models.
### Anything else?
I would be happy to help you do that.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3279/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/7093
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7093/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7093/comments
|
https://api.github.com/repos/ollama/ollama/issues/7093/events
|
https://github.com/ollama/ollama/pull/7093
| 2,564,806,421
|
PR_kwDOJ0Z1Ps59iqLG
| 7,093
|
Adding reference to Promptery (Ollama client) to README.md
|
{
"login": "promptery",
"id": 180473354,
"node_id": "U_kgDOCsHOCg",
"avatar_url": "https://avatars.githubusercontent.com/u/180473354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/promptery",
"html_url": "https://github.com/promptery",
"followers_url": "https://api.github.com/users/promptery/followers",
"following_url": "https://api.github.com/users/promptery/following{/other_user}",
"gists_url": "https://api.github.com/users/promptery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/promptery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/promptery/subscriptions",
"organizations_url": "https://api.github.com/users/promptery/orgs",
"repos_url": "https://api.github.com/users/promptery/repos",
"events_url": "https://api.github.com/users/promptery/events{/privacy}",
"received_events_url": "https://api.github.com/users/promptery/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-10-03T19:03:34
| 2024-11-21T09:46:20
| 2024-11-21T09:46:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7093",
"html_url": "https://github.com/ollama/ollama/pull/7093",
"diff_url": "https://github.com/ollama/ollama/pull/7093.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7093.patch",
"merged_at": "2024-11-21T09:46:20"
}
|
Added link and minimal description for Promptery (https://github.com/promptery/promptery)
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7093/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5269
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5269/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5269/comments
|
https://api.github.com/repos/ollama/ollama/issues/5269/events
|
https://github.com/ollama/ollama/issues/5269
| 2,371,730,593
|
I_kwDOJ0Z1Ps6NXbyh
| 5,269
|
Interesting behavior when running in parallel
|
{
"login": "AI-Guru",
"id": 32195399,
"node_id": "MDQ6VXNlcjMyMTk1Mzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/32195399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI-Guru",
"html_url": "https://github.com/AI-Guru",
"followers_url": "https://api.github.com/users/AI-Guru/followers",
"following_url": "https://api.github.com/users/AI-Guru/following{/other_user}",
"gists_url": "https://api.github.com/users/AI-Guru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AI-Guru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AI-Guru/subscriptions",
"organizations_url": "https://api.github.com/users/AI-Guru/orgs",
"repos_url": "https://api.github.com/users/AI-Guru/repos",
"events_url": "https://api.github.com/users/AI-Guru/events{/privacy}",
"received_events_url": "https://api.github.com/users/AI-Guru/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-06-25T06:05:15
| 2024-07-24T19:04:43
| 2024-07-24T19:04:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi!
I am trying to embed a large number of documents with `jina/jina-embeddings-v2-base-de`, using request parallelism with `OLLAMA_NUM_PARALLEL=16`. I have a multiprocessing script that spawns 16 processes (a sketch follows below). When watching the Ollama output I observe the following:
- A lot of requests go through fine.
- Then suddenly an assertion is violated (see below).
- Ollama then purges the model from GPU memory and reloads it.
- My processes continue; I have a retry mechanism.
Here is the assertion's output:
```
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/llama.cpp:12063: seq_id < n_tokens && "seq_id cannot be larger than n_tokens with pooling_type == MEAN"
```
Any ideas?
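For reproduction, a minimal sketch of the setup described above (placeholder corpus; the retry loop stands in for the retry mechanism mentioned above):
```
import multiprocessing as mp
import ollama

MODEL = "jina/jina-embeddings-v2-base-de"

def embed(text):
    # Simple retry, since individual requests occasionally hit the assertion.
    for attempt in range(3):
        try:
            return ollama.embeddings(model=MODEL, prompt=text)["embedding"]
        except Exception:
            if attempt == 2:
                raise

if __name__ == "__main__":
    docs = ["placeholder document one", "placeholder document two"]
    with mp.Pool(16) as pool:  # matches OLLAMA_NUM_PARALLEL=16
        vectors = pool.map(embed, docs)
```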
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.45
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5269/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1826
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1826/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1826/comments
|
https://api.github.com/repos/ollama/ollama/issues/1826/events
|
https://github.com/ollama/ollama/issues/1826
| 2,068,726,008
|
I_kwDOJ0Z1Ps57TkD4
| 1,826
|
MacOS: Ollama ignores changes to the iogpu.wired_limit_mb tunable when deciding whether to run on GPU or CPU
|
{
"login": "easp",
"id": 414705,
"node_id": "MDQ6VXNlcjQxNDcwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/414705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/easp",
"html_url": "https://github.com/easp",
"followers_url": "https://api.github.com/users/easp/followers",
"following_url": "https://api.github.com/users/easp/following{/other_user}",
"gists_url": "https://api.github.com/users/easp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/easp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/easp/subscriptions",
"organizations_url": "https://api.github.com/users/easp/orgs",
"repos_url": "https://api.github.com/users/easp/repos",
"events_url": "https://api.github.com/users/easp/events{/privacy}",
"received_events_url": "https://api.github.com/users/easp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 14
| 2024-01-06T17:07:06
| 2024-05-10T01:00:54
| 2024-05-10T01:00:53
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
MacOS 14.2.1 on a 32GB M1 Max MBP
```
% ollama run dolphin-mixtral:8x7b-v2.7-q3_K_M
Error: model requires at least 48 GB of memory
```
This error appears immediately; it does not seem to try to load the model.
I tried pulling the model again: same behavior. I've been running this model without issue on 0.1.17.
I tried upping the memory macOS makes available to the GPU, but it didn't help:
`sudo sysctl iogpu.wired_limit_mb=26624`
This is also an issue with mixtral:8x7b-instruct-v0.1-q3_K_M. nous-hermes2:34b-yi-q3_K_M runs, as does nous-hermes2:34b.
According to the final `ggml_metal_add_buffer:` entry in the log:
- On 0.1.18, nous-hermes2:34b requires 19675.33 MB, and 21845.34 MB is available to the GPU.
- On 0.1.17, dolphin-mixtral:8x7b-v2.7-q3_K_M requires 19964.30 MB.
- On 0.1.17, mixtral:8x7b-instruct-v0.1-q3_K_M requires 19965.17 MB.
Has the memory requirement for the mixtral models increased dramatically in 0.1.18, or is this new feature of estimating and enforcing memory requirements causing problems?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1826/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1826/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/368
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/368/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/368/comments
|
https://api.github.com/repos/ollama/ollama/issues/368/events
|
https://github.com/ollama/ollama/pull/368
| 1,854,235,527
|
PR_kwDOJ0Z1Ps5YHffC
| 368
|
set the scopes correctly
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-17T04:39:29
| 2023-08-17T04:42:03
| 2023-08-17T04:42:02
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/368",
"html_url": "https://github.com/ollama/ollama/pull/368",
"diff_url": "https://github.com/ollama/ollama/pull/368.diff",
"patch_url": "https://github.com/ollama/ollama/pull/368.patch",
"merged_at": "2023-08-17T04:42:02"
}
|
This change fixes the scope authorization to allow cross-repo pushes to work correctly.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/368/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/641
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/641/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/641/comments
|
https://api.github.com/repos/ollama/ollama/issues/641/events
|
https://github.com/ollama/ollama/pull/641
| 1,918,325,224
|
PR_kwDOJ0Z1Ps5bfMWV
| 641
|
allow the user to cancel generating with ctrl-C
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-28T21:56:55
| 2023-09-29T00:13:02
| 2023-09-29T00:13:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/641",
"html_url": "https://github.com/ollama/ollama/pull/641",
"diff_url": "https://github.com/ollama/ollama/pull/641.diff",
"patch_url": "https://github.com/ollama/ollama/pull/641.patch",
"merged_at": "2023-09-29T00:13:01"
}
|
The change allows the user to cancel generation using Ctrl-C in the REPL. It handles both the case where the request is canceled before the stream starts and the case where the request is already streaming.
It also changes the REPL so that you can't accidentally hit Ctrl-C and exit the REPL. Instead it will prompt the user to use Ctrl-D or `/bye`.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/641/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/828
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/828/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/828/comments
|
https://api.github.com/repos/ollama/ollama/issues/828/events
|
https://github.com/ollama/ollama/pull/828
| 1,948,413,543
|
PR_kwDOJ0Z1Ps5dEgcQ
| 828
|
image: show parameters
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-17T22:45:17
| 2023-10-19T16:31:32
| 2023-10-19T16:31:31
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/828",
"html_url": "https://github.com/ollama/ollama/pull/828",
"diff_url": "https://github.com/ollama/ollama/pull/828.diff",
"patch_url": "https://github.com/ollama/ollama/pull/828.patch",
"merged_at": "2023-10-19T16:31:31"
}
|
map options back into an `any` slice so the template can do the work of stringifying the values
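A small sketch of the idea (illustrative names and template, not the PR's actual code):
```go
package main

import (
	"os"
	"text/template"
)

func main() {
	opts := map[string]any{"temperature": 0.7, "num_ctx": 4096, "stop": "</s>"}

	// Flatten the map into an any slice of alternating key/value entries;
	// the template stringifies each value with its default formatting.
	var flat []any
	for k, v := range opts {
		flat = append(flat, k, v)
	}

	tmpl := template.Must(template.New("params").Parse(
		"{{ range . }}{{ . }}\n{{ end }}"))
	if err := tmpl.Execute(os.Stdout, flat); err != nil {
		panic(err)
	}
}
```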
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/828/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/824
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/824/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/824/comments
|
https://api.github.com/repos/ollama/ollama/issues/824/events
|
https://github.com/ollama/ollama/pull/824
| 1,948,041,998
|
PR_kwDOJ0Z1Ps5dDOcO
| 824
|
fix MB VRAM log output
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-17T18:48:25
| 2023-10-17T19:35:17
| 2023-10-17T19:35:16
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/824",
"html_url": "https://github.com/ollama/ollama/pull/824",
"diff_url": "https://github.com/ollama/ollama/pull/824.diff",
"patch_url": "https://github.com/ollama/ollama/pull/824.patch",
"merged_at": "2023-10-17T19:35:16"
}
|
This was logging the raw byte count while labeling it MiB:
```
15651045376 MiB VRAM available, loading up to 29 GPU layers
```
Fix:
```
14926 MB VRAM available, loading up to 29 GPU layers
```
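A hedged sketch of the conversion (illustrative, not the exact logging call in the PR):
```go
package main

import "log"

const bytesPerMiB = 1024 * 1024

func main() {
	freeBytes := uint64(15651045376) // the raw value that was being logged

	// Convert the byte count before logging it as MiB.
	log.Printf("%d MB VRAM available, loading up to %d GPU layers",
		freeBytes/bytesPerMiB, 29)
}
```
The division reproduces the figures above: 15651045376 bytes / 1048576 = 14926 MiB.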
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/824/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4229
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4229/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4229/comments
|
https://api.github.com/repos/ollama/ollama/issues/4229/events
|
https://github.com/ollama/ollama/issues/4229
| 2,283,502,510
|
I_kwDOJ0Z1Ps6IG3uu
| 4,229
|
Cannot run model with noexec /tmp
|
{
"login": "jmbit",
"id": 161209930,
"node_id": "U_kgDOCZveSg",
"avatar_url": "https://avatars.githubusercontent.com/u/161209930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmbit",
"html_url": "https://github.com/jmbit",
"followers_url": "https://api.github.com/users/jmbit/followers",
"following_url": "https://api.github.com/users/jmbit/following{/other_user}",
"gists_url": "https://api.github.com/users/jmbit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmbit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmbit/subscriptions",
"organizations_url": "https://api.github.com/users/jmbit/orgs",
"repos_url": "https://api.github.com/users/jmbit/repos",
"events_url": "https://api.github.com/users/jmbit/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmbit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-07T14:22:17
| 2024-05-07T18:48:08
| 2024-05-07T18:48:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
For security reasons, I have mounted /tmp with nodev,nosuid,noexec.
However, this means I can't run ollama models:
```
~ $ ollama run llama3
Error: error starting the external llama server: fork/exec /tmp/ollama2604787016/runners/cpu_avx2/ollama_llama_server: permission denied
```
```fstab
# /etc/fstab: static file system information.
...
# <file system> <mount point> <type> <options> <dump> <pass>
...
# shm, tmp
tmpfs /dev/shm tmpfs defaults,nodev,nosuid,noexec 0 0
tmpfs /tmp tmpfs defaults,nodev,nosuid,noexec 0 0
```
Is there any way to set the execution directory for ollama?
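For illustration, a sketch of how an env-var override for the extraction directory could work (`OLLAMA_TMPDIR` is assumed here from the project's troubleshooting docs; verify it exists in your version before relying on it):
```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Prefer a user-specified directory for extracting runners,
	// falling back to the system temp dir (/tmp, which is noexec here).
	dir := os.Getenv("OLLAMA_TMPDIR")
	if dir == "" {
		dir = os.TempDir()
	}
	runners, err := os.MkdirTemp(dir, "ollama")
	if err != nil {
		panic(err)
	}
	fmt.Println("extracting runners to", runners)
}
```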
### OS
Linux
### GPU
Intel
### CPU
Intel
### Ollama version
0.1.33
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4229/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3187
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3187/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3187/comments
|
https://api.github.com/repos/ollama/ollama/issues/3187/events
|
https://github.com/ollama/ollama/issues/3187
| 2,190,280,242
|
I_kwDOJ0Z1Ps6CjQYy
| 3,187
|
How do you install the ollama gui and terminal executable from command line without manually installing it?
|
{
"login": "shyamalschandra",
"id": 9545735,
"node_id": "MDQ6VXNlcjk1NDU3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9545735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shyamalschandra",
"html_url": "https://github.com/shyamalschandra",
"followers_url": "https://api.github.com/users/shyamalschandra/followers",
"following_url": "https://api.github.com/users/shyamalschandra/following{/other_user}",
"gists_url": "https://api.github.com/users/shyamalschandra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shyamalschandra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shyamalschandra/subscriptions",
"organizations_url": "https://api.github.com/users/shyamalschandra/orgs",
"repos_url": "https://api.github.com/users/shyamalschandra/repos",
"events_url": "https://api.github.com/users/shyamalschandra/events{/privacy}",
"received_events_url": "https://api.github.com/users/shyamalschandra/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 6
| 2024-03-16T22:16:26
| 2024-04-15T20:05:42
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
Automating the process of using the ollama package without going through the manual process of installing it every time.
### How should we solve this?
Make a brew formula that handles this -- `brew install ollama` alone is not enough.
### What is the impact of not solving this?
It is going to cripple your users.
### Anything else?
Please also add more tuning parameters and hooks for developers, from the CLI and from other languages like Swift and Python.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3187/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5201
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5201/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5201/comments
|
https://api.github.com/repos/ollama/ollama/issues/5201/events
|
https://github.com/ollama/ollama/issues/5201
| 2,366,740,716
|
I_kwDOJ0Z1Ps6NEZjs
| 5,201
|
Feature Request: Support for Meta Chameleon
|
{
"login": "PaulCapestany",
"id": 458245,
"node_id": "MDQ6VXNlcjQ1ODI0NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/458245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulCapestany",
"html_url": "https://github.com/PaulCapestany",
"followers_url": "https://api.github.com/users/PaulCapestany/followers",
"following_url": "https://api.github.com/users/PaulCapestany/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulCapestany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulCapestany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulCapestany/subscriptions",
"organizations_url": "https://api.github.com/users/PaulCapestany/orgs",
"repos_url": "https://api.github.com/users/PaulCapestany/repos",
"events_url": "https://api.github.com/users/PaulCapestany/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulCapestany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2024-06-21T15:09:09
| 2024-07-07T06:41:20
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
[Once llama.cpp gets Chameleon support](https://github.com/ggerganov/llama.cpp/issues/7995) it'd be great if ollama could incorporate it as well.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5201/reactions",
"total_count": 20,
"+1": 18,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5201/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/370
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/370/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/370/comments
|
https://api.github.com/repos/ollama/ollama/issues/370/events
|
https://github.com/ollama/ollama/issues/370
| 1,855,222,028
|
I_kwDOJ0Z1Ps5ulHEM
| 370
|
dynamically allocate num_gpu
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-08-17T15:21:47
| 2023-09-26T22:51:29
| 2023-09-26T22:51:29
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When running a model on a system with a GPU available, dynamically set num_gpu to load a reasonable number of layers to the GPU (based on the memory required).
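A rough sketch of the sizing logic (all names and the per-layer cost are illustrative assumptions, not Ollama's actual implementation):
```go
package main

import "fmt"

// numGPULayers picks how many layers fit in free VRAM, capped at the
// model's total layer count (hypothetical helper for illustration).
func numGPULayers(freeVRAM, perLayer uint64, totalLayers int) int {
	n := int(freeVRAM / perLayer)
	if n > totalLayers {
		n = totalLayers
	}
	return n
}

func main() {
	free := uint64(14926) * 1024 * 1024     // ~14.6 GiB reported free
	perLayer := uint64(500) * 1024 * 1024   // assumed per-layer memory cost
	fmt.Println("num_gpu =", numGPULayers(free, perLayer, 32)) // num_gpu = 29
}
```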
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/370/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1795
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1795/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1795/comments
|
https://api.github.com/repos/ollama/ollama/issues/1795/events
|
https://github.com/ollama/ollama/issues/1795
| 2,066,603,552
|
I_kwDOJ0Z1Ps57Ld4g
| 1,795
|
Langchain Ollama: OAuth2 authentication and URL parameters
|
{
"login": "skye0402",
"id": 36907475,
"node_id": "MDQ6VXNlcjM2OTA3NDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/36907475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skye0402",
"html_url": "https://github.com/skye0402",
"followers_url": "https://api.github.com/users/skye0402/followers",
"following_url": "https://api.github.com/users/skye0402/following{/other_user}",
"gists_url": "https://api.github.com/users/skye0402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skye0402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skye0402/subscriptions",
"organizations_url": "https://api.github.com/users/skye0402/orgs",
"repos_url": "https://api.github.com/users/skye0402/repos",
"events_url": "https://api.github.com/users/skye0402/events{/privacy}",
"received_events_url": "https://api.github.com/users/skye0402/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-05T01:53:48
| 2024-01-08T18:57:51
| 2024-01-08T18:57:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
**What this is about:**
Add OAuth2 and basic authentication to the langchain Ollama libraries as well as flexible URLs and ports.
**Why:**
Not everyone runs Ollama on the local machine. I, for one, run it on Kubernetes and always use it through its langchain library. For that, proper authentication is required.
**How:**
I propose to keep Ollama "as-is" and let the wrapping platform define the authentication. That way, only the langchain components need enhancement to offer OAuth or basic authentication through parameters (".env").
**Status:**
I've already enhanced the Ollama libraries to use OAuth2 with Client Credentials. I'm happy to add Basic to it as well if there is interest in adding the code to the main langchain libraries.
I'm talking about these classes:
- ChatOllama
- Ollama
- OllamaEmbeddings
Let me know if/how I can contribute my code.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1795/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/3313
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3313/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3313/comments
|
https://api.github.com/repos/ollama/ollama/issues/3313/events
|
https://github.com/ollama/ollama/issues/3313
| 2,203,951,113
|
I_kwDOJ0Z1Ps6DXaAJ
| 3,313
|
Model search in website is filtered only using name of the model
|
{
"login": "Philotheephilix",
"id": 110274378,
"node_id": "U_kgDOBpKnSg",
"avatar_url": "https://avatars.githubusercontent.com/u/110274378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Philotheephilix",
"html_url": "https://github.com/Philotheephilix",
"followers_url": "https://api.github.com/users/Philotheephilix/followers",
"following_url": "https://api.github.com/users/Philotheephilix/following{/other_user}",
"gists_url": "https://api.github.com/users/Philotheephilix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Philotheephilix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Philotheephilix/subscriptions",
"organizations_url": "https://api.github.com/users/Philotheephilix/orgs",
"repos_url": "https://api.github.com/users/Philotheephilix/repos",
"events_url": "https://api.github.com/users/Philotheephilix/events{/privacy}",
"received_events_url": "https://api.github.com/users/Philotheephilix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-03-23T16:31:06
| 2024-04-02T19:14:25
| 2024-04-02T15:52:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Searching for a model on https://ollama.com/ filters results based only on the **name** of the model.
This practically makes the search box useless, since no one can find the model they want when the keywords appear only in the **description** of the model.
### What did you expect to see?
Search box suggestions that match on both **name and description**.
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
x86
### Platform
WSL2
### Ollama version
website frontend
### GPU
Other
### GPU info
_No response_
### CPU
Intel
### Other software
_No response_
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3313/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3313/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1061
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1061/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1061/comments
|
https://api.github.com/repos/ollama/ollama/issues/1061/events
|
https://github.com/ollama/ollama/pull/1061
| 1,986,300,456
|
PR_kwDOJ0Z1Ps5fEiRs
| 1,061
|
document specifying multiple stop params
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-09T19:51:13
| 2023-11-09T21:16:27
| 2023-11-09T21:16:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1061",
"html_url": "https://github.com/ollama/ollama/pull/1061",
"diff_url": "https://github.com/ollama/ollama/pull/1061.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1061.patch",
"merged_at": "2023-11-09T21:16:26"
}
|
resolves #572
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1061/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7500
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7500/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7500/comments
|
https://api.github.com/repos/ollama/ollama/issues/7500/events
|
https://github.com/ollama/ollama/pull/7500
| 2,634,217,464
|
PR_kwDOJ0Z1Ps6A3tFQ
| 7,500
|
prompt: Use a single token when estimating mllama context size
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-05T01:40:17
| 2024-11-05T18:11:53
| 2024-11-05T18:11:51
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7500",
"html_url": "https://github.com/ollama/ollama/pull/7500",
"diff_url": "https://github.com/ollama/ollama/pull/7500.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7500.patch",
"merged_at": "2024-11-05T18:11:51"
}
|
Currently we assume that images take 768 tokens of context size for the purposes of clipping old messages that exceed the context window. However, our mllama implementation stores the full image embedding in a single token. As a result, there is significant waste of context space.
Ideally, we would handle this more generically and have the implementation report the number of tokens. However, at the moment this would just result in a similar set of 'if' conditions in the runner plus APIs to report it back. So for now, we just keep this simple.
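For illustration, the per-model estimate described above might look roughly like this (a sketch; names and values other than 768 and 1 are assumptions, not the runner's code):
```go
package main

import "fmt"

// imageTokens returns the context-size estimate for one image.
// mllama packs the whole image embedding into a single token, so
// counting the usual 768 per image would waste most of the window.
func imageTokens(model string) int {
	if model == "mllama" {
		return 1
	}
	return 768 // default assumption for other vision models
}

func main() {
	fmt.Println("mllama:", imageTokens("mllama"), "other:", imageTokens("llava"))
}
```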
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7500/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/298
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/298/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/298/comments
|
https://api.github.com/repos/ollama/ollama/issues/298/events
|
https://github.com/ollama/ollama/pull/298
| 1,838,179,338
|
PR_kwDOJ0Z1Ps5XRO8m
| 298
|
add token rate in verbose mode
|
{
"login": "canzden",
"id": 36564369,
"node_id": "MDQ6VXNlcjM2NTY0MzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/36564369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/canzden",
"html_url": "https://github.com/canzden",
"followers_url": "https://api.github.com/users/canzden/followers",
"following_url": "https://api.github.com/users/canzden/following{/other_user}",
"gists_url": "https://api.github.com/users/canzden/gists{/gist_id}",
"starred_url": "https://api.github.com/users/canzden/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/canzden/subscriptions",
"organizations_url": "https://api.github.com/users/canzden/orgs",
"repos_url": "https://api.github.com/users/canzden/repos",
"events_url": "https://api.github.com/users/canzden/events{/privacy}",
"received_events_url": "https://api.github.com/users/canzden/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-08-06T11:57:55
| 2023-09-30T03:59:03
| 2023-09-30T03:59:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/298",
"html_url": "https://github.com/ollama/ollama/pull/298",
"diff_url": "https://github.com/ollama/ollama/pull/298.diff",
"patch_url": "https://github.com/ollama/ollama/pull/298.patch",
"merged_at": null
}
|
#293
add token rate in verbose mode
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/298/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/298/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6964
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6964/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6964/comments
|
https://api.github.com/repos/ollama/ollama/issues/6964/events
|
https://github.com/ollama/ollama/issues/6964
| 2,549,036,432
|
I_kwDOJ0Z1Ps6X7zWQ
| 6,964
|
Please add OrdalieTech/Solon-embeddings-large-0.1 / OrdalieTech/Solon-embeddings-base-0.1
|
{
"login": "gjactat",
"id": 1826064,
"node_id": "MDQ6VXNlcjE4MjYwNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1826064?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gjactat",
"html_url": "https://github.com/gjactat",
"followers_url": "https://api.github.com/users/gjactat/followers",
"following_url": "https://api.github.com/users/gjactat/following{/other_user}",
"gists_url": "https://api.github.com/users/gjactat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gjactat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gjactat/subscriptions",
"organizations_url": "https://api.github.com/users/gjactat/orgs",
"repos_url": "https://api.github.com/users/gjactat/repos",
"events_url": "https://api.github.com/users/gjactat/events{/privacy}",
"received_events_url": "https://api.github.com/users/gjactat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2024-09-25T21:32:29
| 2024-10-23T23:07:33
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Best French embedding models. By far.
https://huggingface.co/OrdalieTech/Solon-embeddings-base-0.1
https://huggingface.co/OrdalieTech/Solon-embeddings-large-0.1
https://ordalie.ai/research/solon
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6964/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5296
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5296/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5296/comments
|
https://api.github.com/repos/ollama/ollama/issues/5296/events
|
https://github.com/ollama/ollama/issues/5296
| 2,375,010,336
|
I_kwDOJ0Z1Ps6Nj8gg
| 5,296
|
upgrade attempt failed
|
{
"login": "NeilWang079",
"id": 42379975,
"node_id": "MDQ6VXNlcjQyMzc5OTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/42379975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NeilWang079",
"html_url": "https://github.com/NeilWang079",
"followers_url": "https://api.github.com/users/NeilWang079/followers",
"following_url": "https://api.github.com/users/NeilWang079/following{/other_user}",
"gists_url": "https://api.github.com/users/NeilWang079/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NeilWang079/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NeilWang079/subscriptions",
"organizations_url": "https://api.github.com/users/NeilWang079/orgs",
"repos_url": "https://api.github.com/users/NeilWang079/repos",
"events_url": "https://api.github.com/users/NeilWang079/events{/privacy}",
"received_events_url": "https://api.github.com/users/NeilWang079/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-06-26T10:50:49
| 2024-06-28T03:21:00
| 2024-06-28T03:21:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
time=2024-06-26T18:25:20.752+08:00 level=INFO source=updater.go:102 msg="New update available at https://github.com/ollama/ollama/releases/download/v0.1.46/OllamaSetup.exe"
time=2024-06-26T18:25:42.556+08:00 level=ERROR source=updater.go:212 msg="failed to download new release: error checking update: Get \"https://github.com/ollama/ollama/releases/download/v0.1.46/OllamaSetup.exe\": read tcp 192.168.31.139:3633->20.205.243.166:443: wsarecv: An established connection was aborted by the software in your host machine."
time=2024-06-26T18:25:50.338+08:00 level=WARN source=lifecycle.go:44 msg="upgrade attempt failed: no update downloads found"
time=2024-06-26T18:42:36.669+08:00 level=WARN source=lifecycle.go:44 msg="upgrade attempt failed: no update downloads found"
```
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
v0.1.45
|
{
"login": "NeilWang079",
"id": 42379975,
"node_id": "MDQ6VXNlcjQyMzc5OTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/42379975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NeilWang079",
"html_url": "https://github.com/NeilWang079",
"followers_url": "https://api.github.com/users/NeilWang079/followers",
"following_url": "https://api.github.com/users/NeilWang079/following{/other_user}",
"gists_url": "https://api.github.com/users/NeilWang079/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NeilWang079/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NeilWang079/subscriptions",
"organizations_url": "https://api.github.com/users/NeilWang079/orgs",
"repos_url": "https://api.github.com/users/NeilWang079/repos",
"events_url": "https://api.github.com/users/NeilWang079/events{/privacy}",
"received_events_url": "https://api.github.com/users/NeilWang079/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5296/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7045
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7045/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7045/comments
|
https://api.github.com/repos/ollama/ollama/issues/7045/events
|
https://github.com/ollama/ollama/issues/7045
| 2,556,611,441
|
I_kwDOJ0Z1Ps6YYstx
| 7,045
|
An option to record conversations via terminal
|
{
"login": "ronxldwilson",
"id": 57818133,
"node_id": "MDQ6VXNlcjU3ODE4MTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/57818133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ronxldwilson",
"html_url": "https://github.com/ronxldwilson",
"followers_url": "https://api.github.com/users/ronxldwilson/followers",
"following_url": "https://api.github.com/users/ronxldwilson/following{/other_user}",
"gists_url": "https://api.github.com/users/ronxldwilson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ronxldwilson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ronxldwilson/subscriptions",
"organizations_url": "https://api.github.com/users/ronxldwilson/orgs",
"repos_url": "https://api.github.com/users/ronxldwilson/repos",
"events_url": "https://api.github.com/users/ronxldwilson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ronxldwilson/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-09-30T13:01:20
| 2024-10-02T01:12:22
| 2024-10-01T23:01:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I would like to request the addition of a feature that allows the user to record conversations with the model to the file system.
Example
> ollama run <model_name> record
Given the record argument, it would record all the chats in the session to a text file, which is useful for going back to previous conversations with the model.
Example
> ollama run <model_name> record "<filename>.txt"
This would allow recording under a specific name or path.
Output
> <date and time>.txt
The conversations could be stored in a predefined path, along with any logs and error messages that come up, in the same file.
Recording the conversations would also make it possible to browse through old ones.
> ollama browse
This would allow revisiting old chats in sequential order, making it easy to browse through them and possibly resume a chat from a previous session.
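A hypothetical sketch of what the record path could do (this flag does not exist; every name here is illustrative):
```go
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	// Default transcript name: timestamp, as proposed above.
	name := time.Now().Format("2006-01-02_15-04-05") + ".txt"
	f, err := os.OpenFile(name, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Each prompt/response exchange is appended as it happens.
	fmt.Fprintln(f, ">>> why is the sky blue?")
	fmt.Fprintln(f, "Rayleigh scattering ...")
}
```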
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7045/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7464
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7464/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7464/comments
|
https://api.github.com/repos/ollama/ollama/issues/7464/events
|
https://github.com/ollama/ollama/issues/7464
| 2,629,070,850
|
I_kwDOJ0Z1Ps6ctHAC
| 7,464
|
Pointer error on latest RC: unsafe.Slice: ptr is nil and len is not zero - llama.go:348
|
{
"login": "samchouse",
"id": 46873232,
"node_id": "MDQ6VXNlcjQ2ODczMjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/46873232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samchouse",
"html_url": "https://github.com/samchouse",
"followers_url": "https://api.github.com/users/samchouse/followers",
"following_url": "https://api.github.com/users/samchouse/following{/other_user}",
"gists_url": "https://api.github.com/users/samchouse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samchouse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samchouse/subscriptions",
"organizations_url": "https://api.github.com/users/samchouse/orgs",
"repos_url": "https://api.github.com/users/samchouse/repos",
"events_url": "https://api.github.com/users/samchouse/events{/privacy}",
"received_events_url": "https://api.github.com/users/samchouse/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-11-01T13:45:25
| 2024-11-05T12:48:51
| 2024-11-02T20:37:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Trying to run the new vision model produces a pointer error. I'm using the latest Docker RC tag.
[ollama.log](https://github.com/user-attachments/files/17600147/ollama.log)
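For context, this class of panic comes from Go's `unsafe.Slice`, which panics when given a nil pointer and a nonzero length; a minimal illustration with a guard (not the runner's actual code):
```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	var p *float32 // stands in for a pointer that came back nil from C
	n := 4096

	// unsafe.Slice panics with "ptr is nil and len is not zero" when the
	// pointer is nil but a nonzero length is requested, so guard first.
	if p == nil {
		fmt.Println("no data returned; skipping")
		return
	}
	_ = unsafe.Slice(p, n)
}
```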
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.4.0-rc6 Warning: client version is 0.3.12
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7464/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6230
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6230/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6230/comments
|
https://api.github.com/repos/ollama/ollama/issues/6230/events
|
https://github.com/ollama/ollama/issues/6230
| 2,453,134,700
|
I_kwDOJ0Z1Ps6SN91s
| 6,230
|
Add Generate Embedding for Sparse vector
|
{
"login": "shashade2012",
"id": 22316457,
"node_id": "MDQ6VXNlcjIyMzE2NDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/22316457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shashade2012",
"html_url": "https://github.com/shashade2012",
"followers_url": "https://api.github.com/users/shashade2012/followers",
"following_url": "https://api.github.com/users/shashade2012/following{/other_user}",
"gists_url": "https://api.github.com/users/shashade2012/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shashade2012/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shashade2012/subscriptions",
"organizations_url": "https://api.github.com/users/shashade2012/orgs",
"repos_url": "https://api.github.com/users/shashade2012/repos",
"events_url": "https://api.github.com/users/shashade2012/events{/privacy}",
"received_events_url": "https://api.github.com/users/shashade2012/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 4
| 2024-08-07T10:26:27
| 2024-11-14T15:22:53
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I see that Ollama already supports bge-m3, and bge-m3 can generate sparse vectors. Is there any way to generate sparse embeddings?
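For reference, a minimal sketch of how embeddings are requested from a local Ollama server today (assuming the default port and that `bge-m3` has already been pulled); the response exposes only the dense vector, so the sparse lexical weights bge-m3 can produce are not returned:
```
import requests

# Ask a local Ollama server for an embedding (dense vector only).
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "bge-m3", "prompt": "What is a sparse embedding?"},
)
resp.raise_for_status()
embedding = resp.json()["embedding"]  # a single dense float vector
print(len(embedding))
```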
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6230/reactions",
"total_count": 8,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 8,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6230/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4463
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4463/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4463/comments
|
https://api.github.com/repos/ollama/ollama/issues/4463/events
|
https://github.com/ollama/ollama/pull/4463
| 2,299,066,045
|
PR_kwDOJ0Z1Ps5vmE_t
| 4,463
|
changed line display to be calculated with runewidth
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-16T00:16:04
| 2024-05-16T21:15:09
| 2024-05-16T21:15:09
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4463",
"html_url": "https://github.com/ollama/ollama/pull/4463",
"diff_url": "https://github.com/ollama/ollama/pull/4463.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4463.patch",
"merged_at": "2024-05-16T21:15:09"
}
|
Calculates line length using `runewidth` instead of `len(string)`, since `len(string)` returns the number of bytes in the string (often yielding an incorrect display length for multi-width runes). Lines are now cut correctly for multi-byte characters (Russian, Latin, etc.) and word-wrapped correctly for multi-width characters (Chinese, Japanese, Korean).
Old: Russian
<img width="560" alt="Screenshot 2024-05-15 at 5 05 42 PM" src="https://github.com/ollama/ollama/assets/76125168/50bda24b-7fff-4cc9-8e85-6f1b4f006a65">
New: Russian
<img width="555" alt="Screenshot 2024-05-15 at 5 01 21 PM" src="https://github.com/ollama/ollama/assets/76125168/a664d1f0-c321-4993-bad7-2c2e066e6a10">
Old: Chinese
<img width="576" alt="Screenshot 2024-05-15 at 5 10 09 PM" src="https://github.com/ollama/ollama/assets/76125168/5d47f924-9467-4f3b-ba1d-515072f875f3">
New: Chinese
<img width="572" alt="Screenshot 2024-05-15 at 5 15 12 PM" src="https://github.com/ollama/ollama/assets/76125168/275e39dd-c89f-4545-9323-0aba1c8060e2">
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4463/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/46
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/46/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/46/comments
|
https://api.github.com/repos/ollama/ollama/issues/46/events
|
https://github.com/ollama/ollama/pull/46
| 1,792,117,913
|
PR_kwDOJ0Z1Ps5U1zQm
| 46
|
Go simple response
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-07-06T19:36:16
| 2023-07-06T19:38:51
| 2023-07-06T19:38:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/46",
"html_url": "https://github.com/ollama/ollama/pull/46",
"diff_url": "https://github.com/ollama/ollama/pull/46.diff",
"patch_url": "https://github.com/ollama/ollama/pull/46.patch",
"merged_at": "2023-07-06T19:38:47"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/46/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/46/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2830
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2830/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2830/comments
|
https://api.github.com/repos/ollama/ollama/issues/2830/events
|
https://github.com/ollama/ollama/issues/2830
| 2,160,944,421
|
I_kwDOJ0Z1Ps6AzWUl
| 2,830
|
Seeking Information on the Origin of ollama Models
|
{
"login": "aaronyy9",
"id": 54395520,
"node_id": "MDQ6VXNlcjU0Mzk1NTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/54395520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronyy9",
"html_url": "https://github.com/aaronyy9",
"followers_url": "https://api.github.com/users/aaronyy9/followers",
"following_url": "https://api.github.com/users/aaronyy9/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronyy9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronyy9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronyy9/subscriptions",
"organizations_url": "https://api.github.com/users/aaronyy9/orgs",
"repos_url": "https://api.github.com/users/aaronyy9/repos",
"events_url": "https://api.github.com/users/aaronyy9/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronyy9/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 9
| 2024-02-29T10:39:19
| 2024-04-23T08:31:52
| 2024-03-01T04:35:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can anyone tell me where the models in Ollama are downloaded from? For example, is gemma:7b-instruct-fp16 (as in `ollama run gemma:7b-instruct-fp16`) sourced from Hugging Face? If so, what is the specific source, or are the models in Ollama newly quantized or fine-tuned by the Ollama team?
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2830/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2830/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4888
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4888/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4888/comments
|
https://api.github.com/repos/ollama/ollama/issues/4888/events
|
https://github.com/ollama/ollama/issues/4888
| 2,339,362,939
|
I_kwDOJ0Z1Ps6Lb9h7
| 4,888
|
Is there a way to implement authentication with an API key in the Ollama client?
|
{
"login": "claudiocassimiro",
"id": 65298393,
"node_id": "MDQ6VXNlcjY1Mjk4Mzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/65298393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/claudiocassimiro",
"html_url": "https://github.com/claudiocassimiro",
"followers_url": "https://api.github.com/users/claudiocassimiro/followers",
"following_url": "https://api.github.com/users/claudiocassimiro/following{/other_user}",
"gists_url": "https://api.github.com/users/claudiocassimiro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/claudiocassimiro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/claudiocassimiro/subscriptions",
"organizations_url": "https://api.github.com/users/claudiocassimiro/orgs",
"repos_url": "https://api.github.com/users/claudiocassimiro/repos",
"events_url": "https://api.github.com/users/claudiocassimiro/events{/privacy}",
"received_events_url": "https://api.github.com/users/claudiocassimiro/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-07T00:34:49
| 2024-06-09T17:39:55
| 2024-06-09T17:39:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
With the OpenAI client we can do this:
```
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
)
```
Why can't we do this with the Ollama client too?
Has someone made a Discord channel to discuss implementing something like this?
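For context, a minimal workaround sketch (assuming a local server on the default port and a pulled `llama3` model): Ollama exposes an OpenAI-compatible endpoint under `/v1`, and the OpenAI client's `api_key` argument is required by the client library but ignored by the server, so any placeholder works. Actual key enforcement would have to live in a reverse proxy in front of Ollama:
```
from openai import OpenAI

# Point the OpenAI client at Ollama's OpenAI-compatible endpoint.
# The api_key is required by the client library but ignored by Ollama.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # placeholder; not checked by the server
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```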
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4888/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5686
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5686/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5686/comments
|
https://api.github.com/repos/ollama/ollama/issues/5686/events
|
https://github.com/ollama/ollama/pull/5686
| 2,407,308,733
|
PR_kwDOJ0Z1Ps51T9XQ
| 5,686
|
serve static files
|
{
"login": "1feralcat",
"id": 51179976,
"node_id": "MDQ6VXNlcjUxMTc5OTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/51179976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1feralcat",
"html_url": "https://github.com/1feralcat",
"followers_url": "https://api.github.com/users/1feralcat/followers",
"following_url": "https://api.github.com/users/1feralcat/following{/other_user}",
"gists_url": "https://api.github.com/users/1feralcat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1feralcat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1feralcat/subscriptions",
"organizations_url": "https://api.github.com/users/1feralcat/orgs",
"repos_url": "https://api.github.com/users/1feralcat/repos",
"events_url": "https://api.github.com/users/1feralcat/events{/privacy}",
"received_events_url": "https://api.github.com/users/1feralcat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-14T07:35:39
| 2024-07-14T07:38:46
| 2024-07-14T07:38:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5686",
"html_url": "https://github.com/ollama/ollama/pull/5686",
"diff_url": "https://github.com/ollama/ollama/pull/5686.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5686.patch",
"merged_at": null
}
| null |
{
"login": "1feralcat",
"id": 51179976,
"node_id": "MDQ6VXNlcjUxMTc5OTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/51179976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1feralcat",
"html_url": "https://github.com/1feralcat",
"followers_url": "https://api.github.com/users/1feralcat/followers",
"following_url": "https://api.github.com/users/1feralcat/following{/other_user}",
"gists_url": "https://api.github.com/users/1feralcat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1feralcat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1feralcat/subscriptions",
"organizations_url": "https://api.github.com/users/1feralcat/orgs",
"repos_url": "https://api.github.com/users/1feralcat/repos",
"events_url": "https://api.github.com/users/1feralcat/events{/privacy}",
"received_events_url": "https://api.github.com/users/1feralcat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5686/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4970
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4970/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4970/comments
|
https://api.github.com/repos/ollama/ollama/issues/4970/events
|
https://github.com/ollama/ollama/issues/4970
| 2,345,237,700
|
I_kwDOJ0Z1Ps6LyXzE
| 4,970
|
llama runner process has terminated: exit status 0xc0000022
|
{
"login": "qq775862807",
"id": 46477893,
"node_id": "MDQ6VXNlcjQ2NDc3ODkz",
"avatar_url": "https://avatars.githubusercontent.com/u/46477893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qq775862807",
"html_url": "https://github.com/qq775862807",
"followers_url": "https://api.github.com/users/qq775862807/followers",
"following_url": "https://api.github.com/users/qq775862807/following{/other_user}",
"gists_url": "https://api.github.com/users/qq775862807/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qq775862807/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qq775862807/subscriptions",
"organizations_url": "https://api.github.com/users/qq775862807/orgs",
"repos_url": "https://api.github.com/users/qq775862807/repos",
"events_url": "https://api.github.com/users/qq775862807/events{/privacy}",
"received_events_url": "https://api.github.com/users/qq775862807/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-11T02:31:57
| 2024-06-11T03:08:14
| 2024-06-11T03:08:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Models: qwen2; Llama 3
```
2024/06/11 10:55:01 routes.go:1011: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST: OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS: OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\EDY\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_TMPDIR:]"
time=2024-06-11T10:55:01.880+08:00 level=INFO source=images.go:740 msg="total blobs: 10"
time=2024-06-11T10:55:01.881+08:00 level=INFO source=images.go:747 msg="total unused blobs removed: 0"
time=2024-06-11T10:55:01.882+08:00 level=INFO source=routes.go:1057 msg="Listening on 127.0.0.1:11434 (version 0.1.42)"
time=2024-06-11T10:55:01.882+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7]"
time=2024-06-11T10:55:02.005+08:00 level=INFO source=types.go:71 msg="inference compute" id=GPU-6ccc095a-61cb-a3f8-99a0-1a09a42f8770 library=cuda compute=8.6 driver=12.2 name="NVIDIA GeForce RTX 3050 OEM" total="8.0 GiB" available="7.0 GiB"
[GIN] 2024/06/11 - 10:55:02 | 200 | 512.1µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/06/11 - 10:55:02 | 200 | 1.5467ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/06/11 - 10:55:02 | 200 | 1.0364ms | 127.0.0.1 | POST "/api/show"
time=2024-06-11T10:55:03.695+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="7.0 GiB" memory.required.full="5.0 GiB" memory.required.partial="5.0 GiB" memory.required.kv="256.0 MiB" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-06-11T10:55:03.695+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="7.0 GiB" memory.required.full="5.0 GiB" memory.required.partial="5.0 GiB" memory.required.kv="256.0 MiB" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-06-11T10:55:03.701+08:00 level=INFO source=server.go:341 msg="starting llama server" cmd="C:\\Users\\EDY\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model C:\\Users\\EDY\\.ollama\\models\\blobs\\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 1 --port 54739"
time=2024-06-11T10:55:03.703+08:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-06-11T10:55:03.703+08:00 level=INFO source=server.go:529 msg="waiting for llama runner to start responding"
time=2024-06-11T10:55:03.703+08:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"
time=2024-06-11T10:55:03.954+08:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000022 "
[GIN] 2024/06/11 - 10:55:03 | 500 | 1.7530836s | 127.0.0.1 | POST "/api/chat"
```
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.41;0.1.42
|
{
"login": "qq775862807",
"id": 46477893,
"node_id": "MDQ6VXNlcjQ2NDc3ODkz",
"avatar_url": "https://avatars.githubusercontent.com/u/46477893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qq775862807",
"html_url": "https://github.com/qq775862807",
"followers_url": "https://api.github.com/users/qq775862807/followers",
"following_url": "https://api.github.com/users/qq775862807/following{/other_user}",
"gists_url": "https://api.github.com/users/qq775862807/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qq775862807/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qq775862807/subscriptions",
"organizations_url": "https://api.github.com/users/qq775862807/orgs",
"repos_url": "https://api.github.com/users/qq775862807/repos",
"events_url": "https://api.github.com/users/qq775862807/events{/privacy}",
"received_events_url": "https://api.github.com/users/qq775862807/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4970/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5076
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5076/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5076/comments
|
https://api.github.com/repos/ollama/ollama/issues/5076/events
|
https://github.com/ollama/ollama/pull/5076
| 2,355,375,534
|
PR_kwDOJ0Z1Ps5ylt8g
| 5,076
|
gpu: add env var for detecting Intel oneapi gpus
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-16T01:53:48
| 2024-06-17T00:09:06
| 2024-06-17T00:09:05
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5076",
"html_url": "https://github.com/ollama/ollama/pull/5076",
"diff_url": "https://github.com/ollama/ollama/pull/5076.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5076.patch",
"merged_at": "2024-06-17T00:09:05"
}
|
Fixes https://github.com/ollama/ollama/issues/5073 until we can find the root cause
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5076/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6105
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6105/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6105/comments
|
https://api.github.com/repos/ollama/ollama/issues/6105/events
|
https://github.com/ollama/ollama/issues/6105
| 2,440,980,484
|
I_kwDOJ0Z1Ps6RfmgE
| 6,105
|
Ollama not using GPU (AMD)
|
{
"login": "theogbob",
"id": 168785618,
"node_id": "U_kgDOCg920g",
"avatar_url": "https://avatars.githubusercontent.com/u/168785618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theogbob",
"html_url": "https://github.com/theogbob",
"followers_url": "https://api.github.com/users/theogbob/followers",
"following_url": "https://api.github.com/users/theogbob/following{/other_user}",
"gists_url": "https://api.github.com/users/theogbob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theogbob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theogbob/subscriptions",
"organizations_url": "https://api.github.com/users/theogbob/orgs",
"repos_url": "https://api.github.com/users/theogbob/repos",
"events_url": "https://api.github.com/users/theogbob/events{/privacy}",
"received_events_url": "https://api.github.com/users/theogbob/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-31T21:26:34
| 2024-07-31T21:36:14
| 2024-07-31T21:36:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm using Ollama with a Ryzen 9 7900X3D and a Radeon RX 7900XTX, and I have ROCm installed.
`rocm-smi` output:
```
============================================ ROCm System Management Interface ============================================
====================================================== Concise Info ======================================================
Device Node IDs Temp Power Partitions SCLK MCLK Fan Perf PwrCap VRAM% GPU%
(DID, GUID) (Edge) (Avg) (Mem, Compute, ID)
==========================================================================================================================
0 1 0x744c, 58112 31.0°C 21.0W N/A, N/A, 0 137Mhz 96Mhz 0% auto 303.0W 5% 7%
1 2 0x164e, 26843 42.0°C 43.199W N/A, N/A, 0 None 3000Mhz 0% auto Unsupported 4% 0%
==========================================================================================================================
================================================== End of ROCm SMI Log ===================================================
```
The issue is ollama is using my CPU and RAM and never uses my GPU.
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
reporting as 0.0.0?
|
{
"login": "theogbob",
"id": 168785618,
"node_id": "U_kgDOCg920g",
"avatar_url": "https://avatars.githubusercontent.com/u/168785618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theogbob",
"html_url": "https://github.com/theogbob",
"followers_url": "https://api.github.com/users/theogbob/followers",
"following_url": "https://api.github.com/users/theogbob/following{/other_user}",
"gists_url": "https://api.github.com/users/theogbob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theogbob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theogbob/subscriptions",
"organizations_url": "https://api.github.com/users/theogbob/orgs",
"repos_url": "https://api.github.com/users/theogbob/repos",
"events_url": "https://api.github.com/users/theogbob/events{/privacy}",
"received_events_url": "https://api.github.com/users/theogbob/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6105/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6105/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4733
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4733/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4733/comments
|
https://api.github.com/repos/ollama/ollama/issues/4733/events
|
https://github.com/ollama/ollama/pull/4733
| 2,326,668,752
|
PR_kwDOJ0Z1Ps5xEJrN
| 4,733
|
added IsValidNamespace function
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-30T23:03:53
| 2024-05-31T21:08:45
| 2024-05-31T21:08:45
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4733",
"html_url": "https://github.com/ollama/ollama/pull/4733",
"diff_url": "https://github.com/ollama/ollama/pull/4733.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4733.patch",
"merged_at": "2024-05-31T21:08:45"
}
|
Added a function to the package for the purpose of validating new usernames on the website.
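The PR's actual validation logic lives in the Go package; as a purely hypothetical illustration of what namespace validation looks like (the rules below are assumed for the example and are not taken from the PR), a regex-based check might be sketched as:
```
import re

# Hypothetical rules, assumed for illustration only (not the PR's actual rules):
# lowercase letters/digits and single interior hyphens, at most 40 characters.
NAMESPACE_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def is_valid_namespace(name):
    return len(name) <= 40 and bool(NAMESPACE_RE.match(name))

print(is_valid_namespace("josh-yan"))   # True
print(is_valid_namespace("-bad-name"))  # False
```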
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4733/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7307
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7307/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7307/comments
|
https://api.github.com/repos/ollama/ollama/issues/7307/events
|
https://github.com/ollama/ollama/issues/7307
| 2,604,041,923
|
I_kwDOJ0Z1Ps6bNobD
| 7,307
|
ollama run hf.co/* does not use Modelfile in repo
|
{
"login": "chrisbward",
"id": 888374,
"node_id": "MDQ6VXNlcjg4ODM3NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/888374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisbward",
"html_url": "https://github.com/chrisbward",
"followers_url": "https://api.github.com/users/chrisbward/followers",
"following_url": "https://api.github.com/users/chrisbward/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisbward/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisbward/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisbward/subscriptions",
"organizations_url": "https://api.github.com/users/chrisbward/orgs",
"repos_url": "https://api.github.com/users/chrisbward/repos",
"events_url": "https://api.github.com/users/chrisbward/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisbward/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-10-22T02:13:35
| 2024-10-23T01:24:52
| 2024-10-23T01:24:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
```
➜ ~ ollama run hf.co/KOOWEEYUS/BlackSheep
>>> /show modelfile
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
# FROM hf.co/KOOWEEYUS/BlackSheep:latest
FROM /media/NAS/MLModels/02_LLMs/ollama_models/blobs/sha256-40db8db74cd91baeb90dbcbf799da57260dfca00d3ab6e40e26a05f73ad572ae
TEMPLATE {{ .Prompt }}
```
Expected to see the Modelfile from the repo being used.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.13
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7307/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8083
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8083/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8083/comments
|
https://api.github.com/repos/ollama/ollama/issues/8083/events
|
https://github.com/ollama/ollama/issues/8083
| 2,737,667,079
|
I_kwDOJ0Z1Ps6jLXwH
| 8,083
|
Is the Llama 3.3 model actually Llama 3.1?
|
{
"login": "juangon",
"id": 1306127,
"node_id": "MDQ6VXNlcjEzMDYxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1306127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juangon",
"html_url": "https://github.com/juangon",
"followers_url": "https://api.github.com/users/juangon/followers",
"following_url": "https://api.github.com/users/juangon/following{/other_user}",
"gists_url": "https://api.github.com/users/juangon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juangon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juangon/subscriptions",
"organizations_url": "https://api.github.com/users/juangon/orgs",
"repos_url": "https://api.github.com/users/juangon/repos",
"events_url": "https://api.github.com/users/juangon/events{/privacy}",
"received_events_url": "https://api.github.com/users/juangon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-12-13T07:33:07
| 2024-12-13T08:47:09
| 2024-12-13T08:47:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi! While reading the logs when using llama3.3 (pulled with `ollama pull llama3.3`), I saw that the model name was reported as Llama 3.1 (see screenshot).

Is this expected?
Thanks!
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.2-rc2
|
{
"login": "juangon",
"id": 1306127,
"node_id": "MDQ6VXNlcjEzMDYxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1306127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juangon",
"html_url": "https://github.com/juangon",
"followers_url": "https://api.github.com/users/juangon/followers",
"following_url": "https://api.github.com/users/juangon/following{/other_user}",
"gists_url": "https://api.github.com/users/juangon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juangon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juangon/subscriptions",
"organizations_url": "https://api.github.com/users/juangon/orgs",
"repos_url": "https://api.github.com/users/juangon/repos",
"events_url": "https://api.github.com/users/juangon/events{/privacy}",
"received_events_url": "https://api.github.com/users/juangon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8083/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8679
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8679/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8679/comments
|
https://api.github.com/repos/ollama/ollama/issues/8679/events
|
https://github.com/ollama/ollama/issues/8679
| 2,819,621,617
|
I_kwDOJ0Z1Ps6oEALx
| 8,679
|
AMD RX 6750 GPU not recognized by Ollama on Arch Linux despite HSA_OVERRIDE_GFX_VERSION
|
{
"login": "Guedxx",
"id": 148347673,
"node_id": "U_kgDOCNebGQ",
"avatar_url": "https://avatars.githubusercontent.com/u/148347673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guedxx",
"html_url": "https://github.com/Guedxx",
"followers_url": "https://api.github.com/users/Guedxx/followers",
"following_url": "https://api.github.com/users/Guedxx/following{/other_user}",
"gists_url": "https://api.github.com/users/Guedxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guedxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guedxx/subscriptions",
"organizations_url": "https://api.github.com/users/Guedxx/orgs",
"repos_url": "https://api.github.com/users/Guedxx/repos",
"events_url": "https://api.github.com/users/Guedxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guedxx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-30T00:21:07
| 2025-01-30T00:31:06
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm running Arch Linux with an AMD RX 6750 GPU. Ollama fails to recognize my GPU as compatible, even after setting the `HSA_OVERRIDE_GFX_VERSION=10.3.0` environment variable (via `Environment=` in the service unit). I've tried several steps to resolve the issue, but nothing has worked so far.
```
time=2025-01-29T21:15:25.499-03:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-29T21:15:25.520-03:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-01-29T21:15:25.522-03:00 level=WARN source=amd_linux.go:378 msg="amdgpu is not supported (supported types:[gfx1010 gfx1012 gfx1030 gfx1100 gfx1101 gfx1102 gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942])" gpu_type=gfx1031 gpu=0 library=/opt/rocm/lib
time=2025-01-29T21:15:25.522-03:00 level=WARN source=amd_linux.go:385 msg="See https://github.com/ollama/ollama/blob/main/docs/gpu.md#overrides for HSA_OVERRIDE_GFX_VERSION usage"
time=2025-01-29T21:15:25.522-03:00 level=INFO source=amd_linux.go:404 msg="no compatible amdgpu devices detected"
time=2025-01-29T21:15:25.522-03:00 level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
```
### OS
Arch Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8679/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4577
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4577/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4577/comments
|
https://api.github.com/repos/ollama/ollama/issues/4577/events
|
https://github.com/ollama/ollama/issues/4577
| 2,310,929,787
|
I_kwDOJ0Z1Ps6Jvf17
| 4,577
|
raspberry pi 32bit userland - /usr/local/bin/ollama: cannot execute: required file not found
|
{
"login": "eliklein02",
"id": 72769850,
"node_id": "MDQ6VXNlcjcyNzY5ODUw",
"avatar_url": "https://avatars.githubusercontent.com/u/72769850?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliklein02",
"html_url": "https://github.com/eliklein02",
"followers_url": "https://api.github.com/users/eliklein02/followers",
"following_url": "https://api.github.com/users/eliklein02/following{/other_user}",
"gists_url": "https://api.github.com/users/eliklein02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliklein02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliklein02/subscriptions",
"organizations_url": "https://api.github.com/users/eliklein02/orgs",
"repos_url": "https://api.github.com/users/eliklein02/repos",
"events_url": "https://api.github.com/users/eliklein02/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliklein02/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 16
| 2024-05-22T16:10:46
| 2024-06-07T07:00:58
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm trying to install Ollama on a Raspberry Pi; I get that it'll be slow, but I'm just playing around.
Running the curl install command worked and the binary downloaded.
But when I run `ollama run gemma` or `ollama pull gemma` I get `-bash: /usr/local/bin/ollama: cannot execute: required file not found`.
### OS
Linux
### GPU
Other
### CPU
Other
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4577/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5837
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5837/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5837/comments
|
https://api.github.com/repos/ollama/ollama/issues/5837/events
|
https://github.com/ollama/ollama/issues/5837
| 2,421,758,592
|
I_kwDOJ0Z1Ps6QWRqA
| 5,837
|
feature request: darwin service registration and sockets without CGO
|
{
"login": "gedw99",
"id": 53147028,
"node_id": "MDQ6VXNlcjUzMTQ3MDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/53147028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gedw99",
"html_url": "https://github.com/gedw99",
"followers_url": "https://api.github.com/users/gedw99/followers",
"following_url": "https://api.github.com/users/gedw99/following{/other_user}",
"gists_url": "https://api.github.com/users/gedw99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gedw99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gedw99/subscriptions",
"organizations_url": "https://api.github.com/users/gedw99/orgs",
"repos_url": "https://api.github.com/users/gedw99/repos",
"events_url": "https://api.github.com/users/gedw99/events{/privacy}",
"received_events_url": "https://api.github.com/users/gedw99/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-07-22T02:17:15
| 2024-07-22T02:17:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am using https://github.com/tprasadtp/go-launchd for services on macOS.
It would be awesome if Ollama could use this so we get socket activation without CGO.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5837/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1723
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1723/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1723/comments
|
https://api.github.com/repos/ollama/ollama/issues/1723/events
|
https://github.com/ollama/ollama/issues/1723
| 2,056,843,368
|
I_kwDOJ0Z1Ps56mPBo
| 1,723
|
Ollama is not loading models from Symlinked folders
|
{
"login": "gerroon",
"id": 8519469,
"node_id": "MDQ6VXNlcjg1MTk0Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8519469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gerroon",
"html_url": "https://github.com/gerroon",
"followers_url": "https://api.github.com/users/gerroon/followers",
"following_url": "https://api.github.com/users/gerroon/following{/other_user}",
"gists_url": "https://api.github.com/users/gerroon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gerroon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerroon/subscriptions",
"organizations_url": "https://api.github.com/users/gerroon/orgs",
"repos_url": "https://api.github.com/users/gerroon/repos",
"events_url": "https://api.github.com/users/gerroon/events{/privacy}",
"received_events_url": "https://api.github.com/users/gerroon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-12-27T02:34:58
| 2023-12-28T22:14:15
| 2023-12-28T22:14:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
Is it possible that Ollama has a problem with symlinked folders that point to network drives? Is there OS-level file locking that would prevent such a thing? I am using WSL2 on Windows 10 and symlinking the `~/.ollama` folder to a network drive, since my VM drive is too small for all the models.
`ollama serve` works, but querying does not produce any answers.
|
{
"login": "gerroon",
"id": 8519469,
"node_id": "MDQ6VXNlcjg1MTk0Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8519469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gerroon",
"html_url": "https://github.com/gerroon",
"followers_url": "https://api.github.com/users/gerroon/followers",
"following_url": "https://api.github.com/users/gerroon/following{/other_user}",
"gists_url": "https://api.github.com/users/gerroon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gerroon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerroon/subscriptions",
"organizations_url": "https://api.github.com/users/gerroon/orgs",
"repos_url": "https://api.github.com/users/gerroon/repos",
"events_url": "https://api.github.com/users/gerroon/events{/privacy}",
"received_events_url": "https://api.github.com/users/gerroon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1723/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/510
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/510/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/510/comments
|
https://api.github.com/repos/ollama/ollama/issues/510/events
|
https://github.com/ollama/ollama/issues/510
| 1,891,504,634
|
I_kwDOJ0Z1Ps5wvhH6
| 510
|
Images for Quants post
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-12T01:12:51
| 2023-09-12T01:12:56
| 2023-09-12T01:12:56
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |



|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/510/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4470
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4470/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4470/comments
|
https://api.github.com/repos/ollama/ollama/issues/4470/events
|
https://github.com/ollama/ollama/issues/4470
| 2,299,800,312
|
I_kwDOJ0Z1Ps6JFCr4
| 4,470
|
"ollama list" should display creation time, not download time
|
{
"login": "LaurentBonnaud",
"id": 2168323,
"node_id": "MDQ6VXNlcjIxNjgzMjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2168323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LaurentBonnaud",
"html_url": "https://github.com/LaurentBonnaud",
"followers_url": "https://api.github.com/users/LaurentBonnaud/followers",
"following_url": "https://api.github.com/users/LaurentBonnaud/following{/other_user}",
"gists_url": "https://api.github.com/users/LaurentBonnaud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LaurentBonnaud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LaurentBonnaud/subscriptions",
"organizations_url": "https://api.github.com/users/LaurentBonnaud/orgs",
"repos_url": "https://api.github.com/users/LaurentBonnaud/repos",
"events_url": "https://api.github.com/users/LaurentBonnaud/events{/privacy}",
"received_events_url": "https://api.github.com/users/LaurentBonnaud/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-05-16T09:05:43
| 2024-05-18T22:18:20
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
here is an example with the llama3 model:
```
$ ollama list
NAME ID SIZE MODIFIED
llama3:8b a6990ed6be41 4.7 GB 2 weeks ago
llama3:latest a6990ed6be41 4.7 GB 25 hours ago
```
The models are the same because they have the same ID. However, their "MODIFIED" columns differ, which is surprising.
It would be more useful to display the date when the model was created/released instead of the download date.
Docker does this, and I think it makes more sense.
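For comparison, `docker images` reports when an image was built, not when it was pulled (illustrative output, not captured from a real machine):
```
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
ubuntu       24.04    2b7cc08dcdbb   5 weeks ago   78.1MB
```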
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4470/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4470/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1491
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1491/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1491/comments
|
https://api.github.com/repos/ollama/ollama/issues/1491/events
|
https://github.com/ollama/ollama/issues/1491
| 2,038,583,193
|
I_kwDOJ0Z1Ps55gk-Z
| 1,491
|
feat: abstract cross platform server start/stop concerns
|
{
"login": "airtonix",
"id": 61225,
"node_id": "MDQ6VXNlcjYxMjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/61225?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/airtonix",
"html_url": "https://github.com/airtonix",
"followers_url": "https://api.github.com/users/airtonix/followers",
"following_url": "https://api.github.com/users/airtonix/following{/other_user}",
"gists_url": "https://api.github.com/users/airtonix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/airtonix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/airtonix/subscriptions",
"organizations_url": "https://api.github.com/users/airtonix/orgs",
"repos_url": "https://api.github.com/users/airtonix/repos",
"events_url": "https://api.github.com/users/airtonix/events{/privacy}",
"received_events_url": "https://api.github.com/users/airtonix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 4
| 2023-12-12T21:34:51
| 2025-01-28T16:50:05
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Regarding the non-obvious aspect of it daemonising:
Problems arising from it not being obvious that systemd was involved:
- https://github.com/jmorganca/ollama/issues/1152
- https://github.com/jmorganca/ollama/issues/1084
- https://github.com/jmorganca/ollama/issues/1391#issuecomment-1842125520
- https://github.com/jmorganca/ollama/issues/1018
- https://github.com/jmorganca/ollama/issues/727
- https://github.com/jmorganca/ollama/issues/707
Problems arising from lack of server control:
- https://github.com/jmorganca/ollama/issues/300
- https://github.com/jmorganca/ollama/issues/793
- https://github.com/jmorganca/ollama/issues/546
It looks like some of these have been closed by directing users to handle it via systemd:
```
systemctl stop ollama.service
```
Should this not be made obvious by an abstraction?
```
ollama server start --system
# linux: prompts for sudo to create the systemd unit if it doesn't exist, then starts it
# mac: something something something steve jobs?
ollama server stop --system
# again, sudo required.
```
```
ollama server start
# no sudo required
# user presses: ctrl + d
# server stops.
```
_Originally posted by @airtonix in https://github.com/jmorganca/ollama/issues/690#issuecomment-1852844736_
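For context, the manual systemd workflow that such an abstraction would wrap today looks roughly like this (a sketch; the unit name `ollama.service` is assumed from the Linux install script):
```
sudo systemctl start ollama.service   # start the system-wide server
sudo systemctl stop ollama.service    # stop it
journalctl -u ollama.service -f       # follow its logs
```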
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1491/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8412
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8412/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8412/comments
|
https://api.github.com/repos/ollama/ollama/issues/8412/events
|
https://github.com/ollama/ollama/issues/8412
| 2,786,088,452
|
I_kwDOJ0Z1Ps6mEFYE
| 8,412
|
Link go examples on README
|
{
"login": "Fastidious",
"id": 8352292,
"node_id": "MDQ6VXNlcjgzNTIyOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8352292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fastidious",
"html_url": "https://github.com/Fastidious",
"followers_url": "https://api.github.com/users/Fastidious/followers",
"following_url": "https://api.github.com/users/Fastidious/following{/other_user}",
"gists_url": "https://api.github.com/users/Fastidious/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fastidious/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fastidious/subscriptions",
"organizations_url": "https://api.github.com/users/Fastidious/orgs",
"repos_url": "https://api.github.com/users/Fastidious/repos",
"events_url": "https://api.github.com/users/Fastidious/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fastidious/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2025-01-14T02:52:26
| 2025-01-14T02:56:45
| 2025-01-14T02:56:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The link to the examples in the README (https://github.com/ollama/ollama/blob/main/examples) renders a 404.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8412/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3962
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3962/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3962/comments
|
https://api.github.com/repos/ollama/ollama/issues/3962/events
|
https://github.com/ollama/ollama/pull/3962
| 2,266,543,850
|
PR_kwDOJ0Z1Ps5t4j2g
| 3,962
|
Update the setup command to use llama3.
|
{
"login": "natalyjazzviolin",
"id": 65251165,
"node_id": "MDQ6VXNlcjY1MjUxMTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/65251165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/natalyjazzviolin",
"html_url": "https://github.com/natalyjazzviolin",
"followers_url": "https://api.github.com/users/natalyjazzviolin/followers",
"following_url": "https://api.github.com/users/natalyjazzviolin/following{/other_user}",
"gists_url": "https://api.github.com/users/natalyjazzviolin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/natalyjazzviolin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/natalyjazzviolin/subscriptions",
"organizations_url": "https://api.github.com/users/natalyjazzviolin/orgs",
"repos_url": "https://api.github.com/users/natalyjazzviolin/repos",
"events_url": "https://api.github.com/users/natalyjazzviolin/events{/privacy}",
"received_events_url": "https://api.github.com/users/natalyjazzviolin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-26T21:44:49
| 2024-04-26T22:41:01
| 2024-04-26T22:41:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3962",
"html_url": "https://github.com/ollama/ollama/pull/3962",
"diff_url": "https://github.com/ollama/ollama/pull/3962.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3962.patch",
"merged_at": "2024-04-26T22:41:01"
}
|
Updates the command to run ollama upon installation to use llama3:
`ollama run llama3`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3962/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8690
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8690/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8690/comments
|
https://api.github.com/repos/ollama/ollama/issues/8690/events
|
https://github.com/ollama/ollama/issues/8690
| 2,820,660,880
|
I_kwDOJ0Z1Ps6oH96Q
| 8,690
|
Deepseek-671B: Error: timed out waiting for llama runner to start - progress 0.00 on 8x L40S
|
{
"login": "orlyandico",
"id": 1325420,
"node_id": "MDQ6VXNlcjEzMjU0MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1325420?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orlyandico",
"html_url": "https://github.com/orlyandico",
"followers_url": "https://api.github.com/users/orlyandico/followers",
"following_url": "https://api.github.com/users/orlyandico/following{/other_user}",
"gists_url": "https://api.github.com/users/orlyandico/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orlyandico/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orlyandico/subscriptions",
"organizations_url": "https://api.github.com/users/orlyandico/orgs",
"repos_url": "https://api.github.com/users/orlyandico/repos",
"events_url": "https://api.github.com/users/orlyandico/events{/privacy}",
"received_events_url": "https://api.github.com/users/orlyandico/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-30T12:08:46
| 2025-01-30T12:12:22
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama (0.5.7) appears to be correctly calculating how many layers to offload to the GPU with default settings. This is on a g6e.48xlarge, which has 1.5 TB of RAM.
```
Jan 30 11:56:19 ip-172-31-21-180 ollama[3237]: time=2025-01-30T11:56:19.283Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=62 layers.offload=51 layers.split=7,7,7,6,6,6,6,6 memory.available="[43.9 GiB 43.9 GiB 43.9 GiB 43.9 GiB 43.9 GiB 43.9 GiB 43.9 GiB 43.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="402.1 GiB" memory.required.partial="330.4 GiB" memory.required.kv="9.5 GiB" memory.required.allocations="[41.4 GiB 41.4 GiB 41.4 GiB 40.9 GiB 41.8 GiB 41.8 GiB 40.9 GiB 40.9 GiB]" memory.weights.total="385.0 GiB" memory.weights.repeating="384.3 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="1019.5 MiB" memory.graph.partial="1019.5 MiB"
Jan 30 11:56:19 ip-172-31-21-180 ollama[3237]: time=2025-01-30T11:56:19.284Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /usr/share/ollama/.ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 2048 --batch-size 512 --n-gpu-layers 51 --threads 96 --parallel 1 --tensor-split 7,7,7,6,6,6,6,6 --port 39933"
...
Jan 30 11:56:20 ip-172-31-21-180 ollama[3237]: llama_load_model_from_file: using device CUDA0 (NVIDIA L40S) - 44940 MiB free
Jan 30 11:56:20 ip-172-31-21-180 ollama[3237]: llama_load_model_from_file: using device CUDA1 (NVIDIA L40S) - 44940 MiB free
Jan 30 11:56:20 ip-172-31-21-180 ollama[3237]: llama_load_model_from_file: using device CUDA2 (NVIDIA L40S) - 44940 MiB free
Jan 30 11:56:20 ip-172-31-21-180 ollama[3237]: llama_load_model_from_file: using device CUDA3 (NVIDIA L40S) - 44940 MiB free
Jan 30 11:56:20 ip-172-31-21-180 ollama[3237]: llama_load_model_from_file: using device CUDA4 (NVIDIA L40S) - 44940 MiB free
Jan 30 11:56:20 ip-172-31-21-180 ollama[3237]: llama_load_model_from_file: using device CUDA5 (NVIDIA L40S) - 44940 MiB free
Jan 30 11:56:20 ip-172-31-21-180 ollama[3237]: llama_load_model_from_file: using device CUDA6 (NVIDIA L40S) - 44940 MiB free
Jan 30 11:56:20 ip-172-31-21-180 ollama[3237]: llama_load_model_from_file: using device CUDA7 (NVIDIA L40S) - 44940 MiB free
Jan 30 11:56:20 ip-172-31-21-180 ollama[3237]: llama_model_loader: loaded meta data with 42 key-value pairs and 1025 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 (version GGUF V3 (latest))
```
However, I never see the GPU VRAM usage climb (on my 2 x P40 setup it normally does as the model loads into VRAM).
It is stuck at this:
```
Thu Jan 30 12:06:42 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.144.03 Driver Version: 550.144.03 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA L40S On | 00000000:9E:00.0 Off | 0 |
| N/A 40C P0 81W / 350W | 433MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA L40S On | 00000000:A0:00.0 Off | 0 |
| N/A 43C P0 87W / 350W | 433MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA L40S On | 00000000:A2:00.0 Off | 0 |
| N/A 41C P0 84W / 350W | 433MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 3 NVIDIA L40S On | 00000000:A4:00.0 Off | 0 |
| N/A 40C P0 81W / 350W | 433MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 4 NVIDIA L40S On | 00000000:C6:00.0 Off | 0 |
| N/A 40C P0 79W / 350W | 433MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 5 NVIDIA L40S On | 00000000:C8:00.0 Off | 0 |
| N/A 40C P0 80W / 350W | 433MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 6 NVIDIA L40S On | 00000000:CA:00.0 Off | 0 |
| N/A 40C P0 81W / 350W | 433MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 7 NVIDIA L40S On | 00000000:CC:00.0 Off | 0 |
| N/A 39C P0 81W / 350W | 433MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 4939 C ...rs/cuda_v12_avx/ollama_llama_server 424MiB |
| 1 N/A N/A 4939 C ...rs/cuda_v12_avx/ollama_llama_server 424MiB |
| 2 N/A N/A 4939 C ...rs/cuda_v12_avx/ollama_llama_server 424MiB |
| 3 N/A N/A 4939 C ...rs/cuda_v12_avx/ollama_llama_server 424MiB |
| 4 N/A N/A 4939 C ...rs/cuda_v12_avx/ollama_llama_server 424MiB |
| 5 N/A N/A 4939 C ...rs/cuda_v12_avx/ollama_llama_server 424MiB |
| 6 N/A N/A 4939 C ...rs/cuda_v12_avx/ollama_llama_server 424MiB |
| 7 N/A N/A 4939 C ...rs/cuda_v12_avx/ollama_llama_server 424MiB |
+-----------------------------------------------------------------------------------------+
```
At the very end I get this error:
```
Jan 30 12:01:19 ip-172-31-21-180 ollama[3237]: time=2025-01-30T12:01:19.487Z level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - "
Jan 30 12:01:19 ip-172-31-21-180 ollama[3237]: [GIN] 2025/01/30 - 12:01:19 | 500 | 5m4s | 127.0.0.1 | POST "/api/generate"
Jan 30 12:01:26 ip-172-31-21-180 ollama[3237]: time=2025-01-30T12:01:26.104Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=6.61651503 model=/usr/share/ollama/.ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9
Jan 30 12:01:28 ip-172-31-21-180 ollama[3237]: time=2025-01-30T12:01:28.080Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=8.592545492 model=/usr/share/ollama/.ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9
Jan 30 12:01:30 ip-172-31-21-180 ollama[3237]: time=2025-01-30T12:01:30.058Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=10.570809357 model=/usr/share/ollama/.ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9
```
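The `5m4s` in the 500 response matches Ollama's default five-minute runner-load timeout. One thing worth trying (an assumption, not a confirmed fix for this hang) is raising that timeout via `OLLAMA_LOAD_TIMEOUT` so a very large model has more time to load:
```
# /etc/systemd/system/ollama.service.d/override.conf (sketch)
[Service]
Environment="OLLAMA_LOAD_TIMEOUT=30m"
```
Then reload and restart: `sudo systemctl daemon-reload && sudo systemctl restart ollama`.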
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8690/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1835
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1835/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1835/comments
|
https://api.github.com/repos/ollama/ollama/issues/1835/events
|
https://github.com/ollama/ollama/issues/1835
| 2,069,015,668
|
I_kwDOJ0Z1Ps57Uqx0
| 1,835
|
Ubuntu desktop freezing for a few minutes
|
{
"login": "horiacristescu",
"id": 1104033,
"node_id": "MDQ6VXNlcjExMDQwMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1104033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/horiacristescu",
"html_url": "https://github.com/horiacristescu",
"followers_url": "https://api.github.com/users/horiacristescu/followers",
"following_url": "https://api.github.com/users/horiacristescu/following{/other_user}",
"gists_url": "https://api.github.com/users/horiacristescu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/horiacristescu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/horiacristescu/subscriptions",
"organizations_url": "https://api.github.com/users/horiacristescu/orgs",
"repos_url": "https://api.github.com/users/horiacristescu/repos",
"events_url": "https://api.github.com/users/horiacristescu/events{/privacy}",
"received_events_url": "https://api.github.com/users/horiacristescu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-01-07T06:55:46
| 2024-06-01T19:55:06
| 2024-06-01T19:55:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The problem: I am using `ollama 0.1.18` on a Linux box with `Ubuntu 18.04`, `2x 2080 Ti`, and `64GB RAM`.
I get desktop freezes from time to time, about every 30 minutes; they last for 1-2 minutes, during which the desktop becomes unresponsive. They usually trigger when I change models: not the first time, but the second is almost sure to freeze.
This happens with diverse models, but I mostly run Mistral fine-tunes such as `mistral:7b-instruct-v0.2-q5_K_M`.
The initial errors seem to be related to file ownership. I was hosting my models on a USB external drive, an NTFS volume mounted under Ubuntu as root:root:
```
Jan 6 05:09:29 HORSE systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Jan 6 05:09:29 HORSE systemd[1]: ollama.service: Failed with result 'exit-code'.
Jan 6 05:09:32 HORSE systemd[1]: ollama.service: Service hold-off time over, scheduling restart.
Jan 6 05:09:32 HORSE systemd[1]: ollama.service: Scheduled restart job, restart counter is at 340167.
Jan 6 05:09:32 HORSE ollama[1566]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Jan 6 05:09:32 HORSE ollama[1566]: Error: could not create directory mkdir /usr/share/ollama: permission denied
Jan 6 05:09:32 HORSE systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Jan 6 05:09:32 HORSE systemd[1]: ollama.service: Failed with result 'exit-code'.
Jan 6 05:09:35 HORSE systemd[1]: ollama.service: Service hold-off time over, scheduling restart.
Jan 6 05:09:35 HORSE systemd[1]: ollama.service: Scheduled restart job, restart counter is at 340168.
Jan 6 05:09:35 HORSE ollama[1613]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Jan 6 05:09:35 HORSE ollama[1613]: Error: could not create directory mkdir /usr/share/ollama: permission denied
Jan 6 05:09:35 HORSE systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Jan 6 05:09:35 HORSE systemd[1]: ollama.service: Failed with result 'exit-code'.
Jan 6 05:09:39 HORSE systemd[1]: ollama.service: Service hold-off time over, scheduling restart.
Jan 6 05:09:39 HORSE systemd[1]: ollama.service: Scheduled restart job, restart counter is at 340169.
Jan 6 05:09:39 HORSE ollama[1647]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Jan 6 05:09:39 HORSE ollama[1647]: Error: could not create directory mkdir /usr/share/ollama: permission denied
```
After fixing the file ownership on this disk, I don't get this error anymore, but I still have issues. It might be related to the USB drive spinning down; it takes about 5-10 seconds to spin up when it is cold.
```
Jan 7 00:11:22 HORSE kernel: [ 4880.761124] watchdog: BUG: soft lockup - CPU#4 stuck for 22s! [ollama:14930]
Jan 7 00:11:22 HORSE kernel: [ 4880.761178] CPU: 4 PID: 14930 Comm: ollama Tainted: P OEL 5.4.0-150-generic #167~18.04.1-Ubuntu
Jan 7 00:12:02 HORSE kernel: [ 4920.785050] watchdog: BUG: soft lockup - CPU#15 stuck for 22s! [ollama:14930]
Jan 7 00:12:02 HORSE kernel: [ 4920.785110] CPU: 15 PID: 14930 Comm: ollama Tainted: P OEL 5.4.0-150-generic #167~18.04.1-Ubuntu
Jan 7 00:12:30 HORSE kernel: [ 4948.785005] watchdog: BUG: soft lockup - CPU#15 stuck for 22s! [ollama:14930]
Jan 7 00:12:30 HORSE kernel: [ 4948.785038] CPU: 15 PID: 14930 Comm: ollama Tainted: P OEL 5.4.0-150-generic #167~18.04.1-Ubuntu
Jan 7 00:13:02 HORSE kernel: [ 4980.760958] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [ollama:14930]
Jan 7 00:13:02 HORSE kernel: [ 4980.761001] CPU: 3 PID: 14930 Comm: ollama Tainted: P OEL 5.4.0-150-generic #167~18.04.1-Ubuntu
Jan 7 00:13:34 HORSE kernel: [ 5012.760922] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [ollama:14930]
Jan 7 00:13:34 HORSE kernel: [ 5012.760963] CPU: 3 PID: 14930 Comm: ollama Tainted: P OEL 5.4.0-150-generic #167~18.04.1-Ubuntu
Jan 7 02:06:27 HORSE kernel: [11786.260697] ptrace attach of "/usr/local/bin/ollama serve"[14903] was attempted by "gdb --batch -ex set style enabled on -ex attach 14903 -ex bt -frame-info source-and-location -ex detach -ex quit"[17265]
Jan 7 02:06:54 HORSE kernel: [11812.737913] watchdog: BUG: soft lockup - CPU#4 stuck for 22s! [ollama:14903]
Jan 7 02:06:54 HORSE kernel: [11812.737954] CPU: 4 PID: 14903 Comm: ollama Tainted: P OEL 5.4.0-150-generic #167~18.04.1-Ubuntu
Jan 7 02:07:22 HORSE kernel: [11840.765849] watchdog: BUG: soft lockup - CPU#16 stuck for 22s! [ollama:14903]
Jan 7 02:07:22 HORSE kernel: [11840.765911] CPU: 16 PID: 14903 Comm: ollama Tainted: P OEL 5.4.0-150-generic #167~18.04.1-Ubuntu
Jan 7 02:07:50 HORSE kernel: [11868.765784] watchdog: BUG: soft lockup - CPU#16 stuck for 22s! [ollama:14903]
Jan 7 02:07:50 HORSE kernel: [11868.765817] CPU: 16 PID: 14903 Comm: ollama Tainted: P OEL 5.4.0-150-generic #167~18.04.1-Ubuntu
Jan 7 02:08:22 HORSE kernel: [11900.761709] watchdog: BUG: soft lockup - CPU#15 stuck for 22s! [ollama:14903]
Jan 7 02:08:22 HORSE kernel: [11900.761750] CPU: 15 PID: 14903 Comm: ollama Tainted: P OEL 5.4.0-150-generic #167~18.04.1-Ubuntu
Jan 7 02:08:54 HORSE kernel: [11932.737634] watchdog: BUG: soft lockup - CPU#4 stuck for 23s! [ollama:14903]
Jan 7 02:08:54 HORSE kernel: [11932.737675] CPU: 4 PID: 14903 Comm: ollama Tainted: P OEL 5.4.0-150-generic #167~18.04.1-Ubuntu
```
Tangential:
1. What is the correct name: `OLLAMA_MODEL=/...` or `OLLAMA_MODELS=/...`? The second doesn't seem to work.
2. It was hard to find how to set this up correctly under systemd, so I gave up and run it from the CLI. I just wanted to set `OLLAMA_HOST`, `OLLAMA_ORIGINS`, and `OLLAMA_MODEL` so I can use it from my other computers over the web. But no matter how I set the `Environment=` clause, or even create a separate environment.json in `/etc/systemd/system/ollama.service.d`, the variables don't seem to take effect (see the sketch below).
Rant: systemd is like a black box: you set something and have no idea why it doesn't work. For example, I have no idea which environment variables the service is actually running with, and the errors go to obscure places that need arcane commands to unearth.
Can we have a systemd setup tutorial for the cases where you need to set a few variables? The Linux setup is sweet if you don't need to change anything, but it becomes tricky if you want to move the models to another folder or change the IP and origins. Maybe the Linux CLI install script could take arguments to set these variables at install time, and preserve them across upgrades.
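For reference, a minimal sketch of a systemd drop-in that should apply such variables (assuming the unit is named `ollama.service`; the paths and values are examples, not a verified recipe):
```
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_MODELS=/mnt/external/models"
```
After editing, reload and restart, then inspect what the service actually sees:
```
sudo systemctl daemon-reload
sudo systemctl restart ollama
systemctl show ollama --property=Environment   # effective variables
```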
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1835/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5822
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5822/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5822/comments
|
https://api.github.com/repos/ollama/ollama/issues/5822/events
|
https://github.com/ollama/ollama/issues/5822
| 2,421,148,253
|
I_kwDOJ0Z1Ps6QT8pd
| 5,822
|
Slow inference on dual A40
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-07-21T00:46:44
| 2024-08-09T23:35:00
| 2024-08-09T23:35:00
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Slow performance on an A40 card with `llama3`
[server.log](https://github.com/user-attachments/files/16322714/server.log)
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5822/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2660
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2660/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2660/comments
|
https://api.github.com/repos/ollama/ollama/issues/2660/events
|
https://github.com/ollama/ollama/issues/2660
| 2,148,167,558
|
I_kwDOJ0Z1Ps6ACm-G
| 2,660
|
Make gemma:7b the default gemma model
|
{
"login": "Jbollenbacher",
"id": 21146083,
"node_id": "MDQ6VXNlcjIxMTQ2MDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/21146083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jbollenbacher",
"html_url": "https://github.com/Jbollenbacher",
"followers_url": "https://api.github.com/users/Jbollenbacher/followers",
"following_url": "https://api.github.com/users/Jbollenbacher/following{/other_user}",
"gists_url": "https://api.github.com/users/Jbollenbacher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jbollenbacher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jbollenbacher/subscriptions",
"organizations_url": "https://api.github.com/users/Jbollenbacher/orgs",
"repos_url": "https://api.github.com/users/Jbollenbacher/repos",
"events_url": "https://api.github.com/users/Jbollenbacher/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jbollenbacher/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-02-22T04:29:15
| 2024-04-15T10:02:18
| 2024-02-23T01:24:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
After #2650 is resolved, can we make the default gemma model the 7b model?
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2660/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4059
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4059/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4059/comments
|
https://api.github.com/repos/ollama/ollama/issues/4059/events
|
https://github.com/ollama/ollama/pull/4059
| 2,272,247,737
|
PR_kwDOJ0Z1Ps5uLxGr
| 4,059
|
rename parser to model/file
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-30T18:57:35
| 2024-05-05T17:24:38
| 2024-05-03T20:01:22
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4059",
"html_url": "https://github.com/ollama/ollama/pull/4059",
"diff_url": "https://github.com/ollama/ollama/pull/4059.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4059.patch",
"merged_at": "2024-05-03T20:01:22"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4059/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6138
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6138/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6138/comments
|
https://api.github.com/repos/ollama/ollama/issues/6138/events
|
https://github.com/ollama/ollama/issues/6138
| 2,444,197,410
|
I_kwDOJ0Z1Ps6Rr34i
| 6,138
|
Empty response from API call given context
|
{
"login": "stavsap",
"id": 4201054,
"node_id": "MDQ6VXNlcjQyMDEwNTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4201054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stavsap",
"html_url": "https://github.com/stavsap",
"followers_url": "https://api.github.com/users/stavsap/followers",
"following_url": "https://api.github.com/users/stavsap/following{/other_user}",
"gists_url": "https://api.github.com/users/stavsap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stavsap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stavsap/subscriptions",
"organizations_url": "https://api.github.com/users/stavsap/orgs",
"repos_url": "https://api.github.com/users/stavsap/repos",
"events_url": "https://api.github.com/users/stavsap/events{/privacy}",
"received_events_url": "https://api.github.com/users/stavsap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2024-08-02T06:45:14
| 2024-08-03T06:14:14
| 2024-08-03T01:10:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When a `generate` API call is made with the context returned by a previous call, the response is instant and empty.
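A minimal reproduction of the pattern described (model name and context token values are illustrative):
```
# The first call returns a "context" array in its final JSON chunk
curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Hi"}'

# Feeding that context back yields an instant, empty response
curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "And then?", "context": [128006, 882, 128007]}'
```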
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.2
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6138/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7301
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7301/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7301/comments
|
https://api.github.com/repos/ollama/ollama/issues/7301/events
|
https://github.com/ollama/ollama/issues/7301
| 2,603,519,750
|
I_kwDOJ0Z1Ps6bLo8G
| 7,301
|
Ollama run <model> closing unexpectedly
|
{
"login": "cbrousseauAumni",
"id": 152902689,
"node_id": "U_kgDOCR0cIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/152902689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cbrousseauAumni",
"html_url": "https://github.com/cbrousseauAumni",
"followers_url": "https://api.github.com/users/cbrousseauAumni/followers",
"following_url": "https://api.github.com/users/cbrousseauAumni/following{/other_user}",
"gists_url": "https://api.github.com/users/cbrousseauAumni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cbrousseauAumni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cbrousseauAumni/subscriptions",
"organizations_url": "https://api.github.com/users/cbrousseauAumni/orgs",
"repos_url": "https://api.github.com/users/cbrousseauAumni/repos",
"events_url": "https://api.github.com/users/cbrousseauAumni/events{/privacy}",
"received_events_url": "https://api.github.com/users/cbrousseauAumni/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 25
| 2024-10-21T19:50:48
| 2024-10-23T20:28:06
| 2024-10-23T20:28:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am running `ollama run mistral ""& /set nohistory & /set quiet &` in a bash script as the entrypoint to a Docker container, in an attempt to start mistral right after starting the Ollama server and pulling the model.
Output of `ps aux | grep ollama | grep -v grep` is `0`
These are the outputs of running subprocesses:
`7 root 0:31 {ollama} /run/rosetta/rosetta /usr/bin/ollama ollama serve`
`30 root 0:00 {ollama} /run/rosetta/rosetta /usr/bin/ollama ollama run mistral`
But when I go to access the model:
`$curl http://0.0.0.0:11434` - Ollama is running
`litellm.exceptions.APIConnectionError: litellm.APIConnectionError: OllamaException - {"error":"model \"mistral\" not found, try pulling it first"}`
If I manually exec into the container and run `ollama run mistral`, it tries to pull the model all over again, despite the model already having been pulled during the build.
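For comparison, a common entrypoint pattern for this (a sketch, not the reporter's exact script) serves first, waits, then pulls and runs:
```
#!/bin/sh
ollama serve &           # start the server in the background
sleep 5                  # crude wait for the API to come up
ollama pull mistral      # ensure the model exists in the runtime volume
ollama run mistral "" &  # warm the model
wait
```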
Any help would be wonderful, let me know if there are relevant details missing.
### OS
Linux
### GPU
Other
### CPU
Intel
### Ollama version
ollama version is 0.0.0 is the output
|
{
"login": "cbrousseauAumni",
"id": 152902689,
"node_id": "U_kgDOCR0cIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/152902689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cbrousseauAumni",
"html_url": "https://github.com/cbrousseauAumni",
"followers_url": "https://api.github.com/users/cbrousseauAumni/followers",
"following_url": "https://api.github.com/users/cbrousseauAumni/following{/other_user}",
"gists_url": "https://api.github.com/users/cbrousseauAumni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cbrousseauAumni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cbrousseauAumni/subscriptions",
"organizations_url": "https://api.github.com/users/cbrousseauAumni/orgs",
"repos_url": "https://api.github.com/users/cbrousseauAumni/repos",
"events_url": "https://api.github.com/users/cbrousseauAumni/events{/privacy}",
"received_events_url": "https://api.github.com/users/cbrousseauAumni/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7301/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/461
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/461/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/461/comments
|
https://api.github.com/repos/ollama/ollama/issues/461/events
|
https://github.com/ollama/ollama/pull/461
| 1,879,186,391
|
PR_kwDOJ0Z1Ps5ZbW00
| 461
|
fix inherit params
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-03T18:12:32
| 2023-09-05T19:30:24
| 2023-09-05T19:30:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/461",
"html_url": "https://github.com/ollama/ollama/pull/461",
"diff_url": "https://github.com/ollama/ollama/pull/461.diff",
"patch_url": "https://github.com/ollama/ollama/pull/461.patch",
"merged_at": "2023-09-05T19:30:23"
}
|
Params from inherited models are not merged into the new model.
Test cases:
- add parameters to model with no parameters:
input:
```
FROM orca-mini:3b
PARAMETER temperature 0
```
output:
```json
{
"temperature": 0
}
```
- no parameters with model with parameters:
input:
```
FROM codellama:7b-code
```
output:
```json
{
"rope_frequency_base": 1000000
}
```
- add parameters to model with parameters:
input:
```
FROM codellama:7b-code
PARAMETER temperature 0
```
output:
```json
{
"rope_frequency_base": 1000000,
"temperature": 0
}
```
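The intended behavior is a shallow merge where the child Modelfile's parameters are applied on top of the inherited ones. A minimal Go sketch of that merge, assuming a `map[string]any` parameter representation (the `mergeParams` helper is illustrative, not ollama's actual internals):
```go
package main

import "fmt"

// mergeParams is a hypothetical helper: inherited params are copied first,
// then the child's params are applied on top, overriding any duplicates.
func mergeParams(inherited, child map[string]any) map[string]any {
	merged := make(map[string]any, len(inherited)+len(child))
	for k, v := range inherited {
		merged[k] = v
	}
	for k, v := range child {
		merged[k] = v
	}
	return merged
}

func main() {
	inherited := map[string]any{"rope_frequency_base": 1000000}
	child := map[string]any{"temperature": 0}
	// Prints a map containing both keys, matching the third test case above.
	fmt.Println(mergeParams(inherited, child))
}
```
Last-write-wins on duplicate keys reproduces the third test case above: the child's `temperature` is added while the inherited `rope_frequency_base` is kept.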
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/461/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8434
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8434/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8434/comments
|
https://api.github.com/repos/ollama/ollama/issues/8434/events
|
https://github.com/ollama/ollama/issues/8434
| 2,789,008,648
|
I_kwDOJ0Z1Ps6mPOUI
| 8,434
|
internlm3-8b-instruct
|
{
"login": "vYLQs6",
"id": 143073604,
"node_id": "U_kgDOCIchRA",
"avatar_url": "https://avatars.githubusercontent.com/u/143073604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vYLQs6",
"html_url": "https://github.com/vYLQs6",
"followers_url": "https://api.github.com/users/vYLQs6/followers",
"following_url": "https://api.github.com/users/vYLQs6/following{/other_user}",
"gists_url": "https://api.github.com/users/vYLQs6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vYLQs6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vYLQs6/subscriptions",
"organizations_url": "https://api.github.com/users/vYLQs6/orgs",
"repos_url": "https://api.github.com/users/vYLQs6/repos",
"events_url": "https://api.github.com/users/vYLQs6/events{/privacy}",
"received_events_url": "https://api.github.com/users/vYLQs6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 3
| 2025-01-15T07:10:44
| 2025-01-17T02:34:22
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### https://huggingface.co/internlm/internlm3-8b-instruct
---
### llama.cpp commit:
### https://github.com/ggerganov/llama.cpp/pull/11233
---
## Introduction
InternLM3 has open-sourced an 8-billion parameter instruction model, InternLM3-8B-Instruct, designed for general-purpose usage and advanced reasoning. This model has the following characteristics:
- **Enhanced performance at reduced cost**:
State-of-the-art performance on reasoning and knowledge-intensive tasks, surpassing models like Llama3.1-8B and Qwen2.5-7B. Remarkably, InternLM3 is trained on only 4 trillion high-quality tokens, saving more than 75% of the training cost compared to other LLMs of similar scale.
- **Deep thinking capability**:
InternLM3 supports both a deep thinking mode for solving complicated reasoning tasks via long chain-of-thought, and a normal response mode for fluent user interactions.
## InternLM3-8B-Instruct
### Performance Evaluation
We conducted a comprehensive evaluation of InternLM using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/). The evaluation covered five dimensions of capabilities: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Here are some of the evaluation results, and you can visit the [OpenCompass leaderboard](https://rank.opencompass.org.cn) for more evaluation results.
| Benchmark | | InternLM3-8B-Instruct | Qwen2.5-7B-Instruct | Llama3.1-8B-Instruct | GPT-4o-mini (closed source) |
| ------------ | ------------------------------- | --------------------- | ------------------- | -------------------- | ------------------------- |
| General | CMMLU(0-shot) | **83.1** | 75.8 | 53.9 | 66.0 |
| | MMLU(0-shot) | 76.6 | **76.8** | 71.8 | 82.7 |
| | MMLU-Pro(0-shot) | **57.6** | 56.2 | 48.1 | 64.1 |
| Reasoning | GPQA-Diamond(0-shot) | **37.4** | 33.3 | 24.2 | 42.9 |
| | DROP(0-shot) | **83.1** | 80.4 | 81.6 | 85.2 |
| | HellaSwag(10-shot) | **91.2** | 85.3 | 76.7 | 89.5 |
| | KOR-Bench(0-shot) | **56.4** | 44.6 | 47.7 | 58.2 |
| MATH | MATH-500(0-shot) | **83.0*** | 72.4 | 48.4 | 74.0 |
| | AIME2024(0-shot) | **20.0*** | 16.7 | 6.7 | 13.3 |
| Coding | LiveCodeBench(2407-2409 Pass@1) | **17.8** | 16.8 | 12.9 | 21.8 |
| | HumanEval(Pass@1) | 82.3 | **85.4** | 72.0 | 86.6 |
| Instruction | IFEval(Prompt-Strict) | **79.3** | 71.7 | 75.2 | 79.7 |
| Long Context | RULER(4-128K Average) | 87.9 | 81.4 | **88.5** | 90.7 |
| Chat | AlpacaEval 2.0(LC WinRate) | **51.1** | 30.3 | 25.0 | 50.7 |
| | WildBench(Raw Score) | **33.1** | 23.3 | 1.5 | 40.3 |
| | MT-Bench-101(Score 1-10) | **8.59** | 8.49 | 8.37 | 8.87 |
- The evaluation results were obtained from [OpenCompass](https://github.com/internLM/OpenCompass/) (data marked with * were evaluated in Thinking Mode); the evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/internLM/OpenCompass/).
- Evaluation numbers may differ across versions of [OpenCompass](https://github.com/internLM/OpenCompass/), so please refer to its latest evaluation results.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8434/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/8434/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/792
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/792/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/792/comments
|
https://api.github.com/repos/ollama/ollama/issues/792/events
|
https://github.com/ollama/ollama/issues/792
| 1,943,704,336
|
I_kwDOJ0Z1Ps5z2pMQ
| 792
|
Implement Streaming LLM
|
{
"login": "Liuxyly",
"id": 3655869,
"node_id": "MDQ6VXNlcjM2NTU4Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3655869?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Liuxyly",
"html_url": "https://github.com/Liuxyly",
"followers_url": "https://api.github.com/users/Liuxyly/followers",
"following_url": "https://api.github.com/users/Liuxyly/following{/other_user}",
"gists_url": "https://api.github.com/users/Liuxyly/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Liuxyly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Liuxyly/subscriptions",
"organizations_url": "https://api.github.com/users/Liuxyly/orgs",
"repos_url": "https://api.github.com/users/Liuxyly/repos",
"events_url": "https://api.github.com/users/Liuxyly/events{/privacy}",
"received_events_url": "https://api.github.com/users/Liuxyly/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2023-10-15T04:29:26
| 2024-09-04T18:23:20
| 2024-09-04T18:23:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I read the following llama.cpp issue and would like to use this feature. How can I do so?
https://github.com/ggerganov/llama.cpp/issues/3440
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/792/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/792/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3182
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3182/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3182/comments
|
https://api.github.com/repos/ollama/ollama/issues/3182/events
|
https://github.com/ollama/ollama/issues/3182
| 2,190,152,298
|
I_kwDOJ0Z1Ps6CixJq
| 3,182
|
Add "Stop" command
|
{
"login": "haydonryan",
"id": 6804348,
"node_id": "MDQ6VXNlcjY4MDQzNDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6804348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haydonryan",
"html_url": "https://github.com/haydonryan",
"followers_url": "https://api.github.com/users/haydonryan/followers",
"following_url": "https://api.github.com/users/haydonryan/following{/other_user}",
"gists_url": "https://api.github.com/users/haydonryan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haydonryan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haydonryan/subscriptions",
"organizations_url": "https://api.github.com/users/haydonryan/orgs",
"repos_url": "https://api.github.com/users/haydonryan/repos",
"events_url": "https://api.github.com/users/haydonryan/events{/privacy}",
"received_events_url": "https://api.github.com/users/haydonryan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-03-16T17:41:51
| 2024-05-18T03:21:22
| 2024-05-18T03:21:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
ollama is great! There are `ollama serve` / `ollama start` commands, but there is no stop. There's already a big (closed) issue on how to stop it from autostarting on reboot, and the answer is OS dependent. If you can create the service with the ollama CLI, then you should be able to stop or disable the service with the CLI too. Personally, I don't want to run the service all the time unless I'm utilizing it.
### How should we solve this?
add an `ollama stop` command (sketched below)
### What is the impact of not solving this?
People will keep opening issues asking how to stop the service.
### Anything else?
Nope - thanks for a great program!
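On Linux installs that use systemd, `systemctl stop ollama` already stops the service, but a cross-platform CLI command would hide that OS dependence. A minimal sketch of what such a subcommand could look like with cobra (the `stop` wiring and `stopServer` helper are hypothetical, not ollama's actual CLI code):
```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// stopServer is a hypothetical helper that would signal the running
// ollama server to shut down (e.g. via its API or the OS service manager).
func stopServer() error {
	fmt.Println("stopping ollama server...")
	return nil
}

func main() {
	stopCmd := &cobra.Command{
		Use:   "stop",
		Short: "Stop the running ollama server",
		RunE: func(cmd *cobra.Command, args []string) error {
			return stopServer()
		},
	}

	root := &cobra.Command{Use: "ollama"}
	root.AddCommand(stopCmd)
	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}
```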
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3182/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3182/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3522
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3522/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3522/comments
|
https://api.github.com/repos/ollama/ollama/issues/3522/events
|
https://github.com/ollama/ollama/pull/3522
| 2,229,602,241
|
PR_kwDOJ0Z1Ps5r7Rhd
| 3,522
|
refactor(cmd): distribute commands into root.go, create.go, run.go, etc.
|
{
"login": "igophper",
"id": 34326532,
"node_id": "MDQ6VXNlcjM0MzI2NTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/34326532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igophper",
"html_url": "https://github.com/igophper",
"followers_url": "https://api.github.com/users/igophper/followers",
"following_url": "https://api.github.com/users/igophper/following{/other_user}",
"gists_url": "https://api.github.com/users/igophper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/igophper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/igophper/subscriptions",
"organizations_url": "https://api.github.com/users/igophper/orgs",
"repos_url": "https://api.github.com/users/igophper/repos",
"events_url": "https://api.github.com/users/igophper/events{/privacy}",
"received_events_url": "https://api.github.com/users/igophper/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-07T07:09:05
| 2024-04-16T11:16:23
| 2024-04-16T11:16:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3522",
"html_url": "https://github.com/ollama/ollama/pull/3522",
"diff_url": "https://github.com/ollama/ollama/pull/3522.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3522.patch",
"merged_at": null
}
|
Previously, all commands were located in a single cmd.go file. This refactor improves the organization and readability of the code by distributing commands into their respective files, such as root.go, create.go, run.go, etc.
The change only moves code into the new files; none of the command logic was modified.
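A minimal sketch of the resulting layout, assuming each file exposes a constructor that root.go assembles (names here are illustrative):
```go
package main

import (
	"os"

	"github.com/spf13/cobra"
)

// In the refactor, each of these constructors would live in its own file
// (run.go, create.go, ...); they are shown together here for brevity.
func newRunCommand() *cobra.Command {
	return &cobra.Command{Use: "run MODEL", Short: "Run a model"}
}

func newCreateCommand() *cobra.Command {
	return &cobra.Command{Use: "create MODEL", Short: "Create a model from a Modelfile"}
}

// root.go would only assemble the pieces; no command logic lives there.
func newRootCommand() *cobra.Command {
	root := &cobra.Command{Use: "ollama"}
	root.AddCommand(newRunCommand(), newCreateCommand())
	return root
}

func main() {
	if err := newRootCommand().Execute(); err != nil {
		os.Exit(1)
	}
}
```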
|
{
"login": "igophper",
"id": 34326532,
"node_id": "MDQ6VXNlcjM0MzI2NTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/34326532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igophper",
"html_url": "https://github.com/igophper",
"followers_url": "https://api.github.com/users/igophper/followers",
"following_url": "https://api.github.com/users/igophper/following{/other_user}",
"gists_url": "https://api.github.com/users/igophper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/igophper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/igophper/subscriptions",
"organizations_url": "https://api.github.com/users/igophper/orgs",
"repos_url": "https://api.github.com/users/igophper/repos",
"events_url": "https://api.github.com/users/igophper/events{/privacy}",
"received_events_url": "https://api.github.com/users/igophper/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3522/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3522/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4266
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4266/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4266/comments
|
https://api.github.com/repos/ollama/ollama/issues/4266/events
|
https://github.com/ollama/ollama/pull/4266
| 2,286,512,690
|
PR_kwDOJ0Z1Ps5u7ekA
| 4,266
|
Support forced spreading for multi GPU
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-05-08T21:34:48
| 2024-06-06T17:58:47
| 2024-06-06T17:58:44
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4266",
"html_url": "https://github.com/ollama/ollama/pull/4266",
"diff_url": "https://github.com/ollama/ollama/pull/4266.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4266.patch",
"merged_at": null
}
|
Our default behavior today is to try to fit into a single GPU if possible. Some users would prefer the old behavior of always spreading across multiple GPUs even if the model can fit into one. This exposes that tunable behavior.
Fixes #4198
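A minimal sketch of the placement decision this tunable exposes (the types and `pickGPUs` function are illustrative, not the actual scheduler code):
```go
package main

import "fmt"

type gpu struct {
	id      string
	freeMem uint64
}

// pickGPUs is an illustrative version of the placement choice: by default,
// prefer the first single GPU that fits the model; when spread is forced,
// return every GPU so layers are distributed across all of them.
func pickGPUs(gpus []gpu, required uint64, spread bool) []gpu {
	if !spread {
		for _, g := range gpus {
			if g.freeMem >= required {
				return []gpu{g} // model fits on one GPU
			}
		}
	}
	return gpus // spread across all GPUs
}

func main() {
	gpus := []gpu{{"gpu0", 24 << 30}, {"gpu1", 24 << 30}}
	fmt.Println(len(pickGPUs(gpus, 8<<30, false))) // 1: fits on a single GPU
	fmt.Println(len(pickGPUs(gpus, 8<<30, true)))  // 2: forced spreading
}
```
Defaulting to a single GPU avoids cross-GPU transfer overhead when the model fits; forcing a spread trades that for more free memory headroom per device.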
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4266/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4266/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/559
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/559/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/559/comments
|
https://api.github.com/repos/ollama/ollama/issues/559/events
|
https://github.com/ollama/ollama/pull/559
| 1,905,652,360
|
PR_kwDOJ0Z1Ps5a0b7D
| 559
|
remove tmp directories created by previous servers
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-20T20:13:11
| 2023-09-21T19:38:50
| 2023-09-21T19:38:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/559",
"html_url": "https://github.com/ollama/ollama/pull/559",
"diff_url": "https://github.com/ollama/ollama/pull/559.diff",
"patch_url": "https://github.com/ollama/ollama/pull/559.patch",
"merged_at": "2023-09-21T19:38:49"
}
|
with CUDA libs packed in, these start to get pretty big; clean up temp `ollama-` dirs left by previous servers before creating new ones.
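A minimal sketch of that cleanup, assuming the directories live under the system temp dir with an `ollama-` prefix (illustrative, not the exact PR code):
```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// cleanupStaleTmpDirs removes leftover ollama-* temp directories from
// previous server runs before a new one is created.
func cleanupStaleTmpDirs() {
	matches, err := filepath.Glob(filepath.Join(os.TempDir(), "ollama-*"))
	if err != nil {
		return
	}
	for _, dir := range matches {
		if err := os.RemoveAll(dir); err != nil {
			log.Printf("failed to remove stale dir %s: %v", dir, err)
		}
	}
}

func main() {
	cleanupStaleTmpDirs()
}
```
A real implementation would also want to avoid removing a directory still in use by a live server.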
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/559/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4789
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4789/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4789/comments
|
https://api.github.com/repos/ollama/ollama/issues/4789/events
|
https://github.com/ollama/ollama/issues/4789
| 2,329,746,292
|
I_kwDOJ0Z1Ps6K3Rt0
| 4,789
|
deepseek-v2 responding in Chinese by default
|
{
"login": "rb81",
"id": 48117105,
"node_id": "MDQ6VXNlcjQ4MTE3MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/48117105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rb81",
"html_url": "https://github.com/rb81",
"followers_url": "https://api.github.com/users/rb81/followers",
"following_url": "https://api.github.com/users/rb81/following{/other_user}",
"gists_url": "https://api.github.com/users/rb81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rb81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rb81/subscriptions",
"organizations_url": "https://api.github.com/users/rb81/orgs",
"repos_url": "https://api.github.com/users/rb81/repos",
"events_url": "https://api.github.com/users/rb81/events{/privacy}",
"received_events_url": "https://api.github.com/users/rb81/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-06-02T16:28:14
| 2024-06-21T00:40:22
| 2024-06-02T18:33:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
deepseek-v2 seems to respond in Chinese only.
### OS
Linux
### GPU
Other
### CPU
Intel
### Ollama version
0.1.41
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4789/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4789/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7554
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7554/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7554/comments
|
https://api.github.com/repos/ollama/ollama/issues/7554/events
|
https://github.com/ollama/ollama/pull/7554
| 2,640,705,978
|
PR_kwDOJ0Z1Ps6BLFah
| 7,554
|
feat: Support Moore Threads GPU
|
{
"login": "yeahdongcn",
"id": 2831050,
"node_id": "MDQ6VXNlcjI4MzEwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2831050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeahdongcn",
"html_url": "https://github.com/yeahdongcn",
"followers_url": "https://api.github.com/users/yeahdongcn/followers",
"following_url": "https://api.github.com/users/yeahdongcn/following{/other_user}",
"gists_url": "https://api.github.com/users/yeahdongcn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yeahdongcn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yeahdongcn/subscriptions",
"organizations_url": "https://api.github.com/users/yeahdongcn/orgs",
"repos_url": "https://api.github.com/users/yeahdongcn/repos",
"events_url": "https://api.github.com/users/yeahdongcn/events{/privacy}",
"received_events_url": "https://api.github.com/users/yeahdongcn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 6
| 2024-11-07T11:18:10
| 2025-01-26T03:00:16
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7554",
"html_url": "https://github.com/ollama/ollama/pull/7554",
"diff_url": "https://github.com/ollama/ollama/pull/7554.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7554.patch",
"merged_at": null
}
|
This PR introduces support for [Moore Threads](https://en.mthreads.com/) GPUs, leveraging the capabilities of MUSA (Moore Threads Unified System Architecture) to accelerate LLM inference. Due to significant upstream changes in version 0.4.x, this PR is a fresh submission (refer to https://github.com/ollama/ollama/pull/5556 for additional context) with the following key updates:
### Key Updates:
1. Moore Threads GPU Detection: Detects Moore Threads GPUs using `libmusa`, `libmusart`, and `libmtml`, similar to the existing `CUDA` implementation (a sketch of this probing style follows the edit logs below).
2. MUSA_V1 Runner: Adds support for building the `musa_v1` runner for MTT GPUs.
3. Docker Image Build: Provides support for building Docker images alongside `CUDA`/`ROCm` integration.
### Testing Done:
1. Local Build on `Linux/amd64` Host with `MUSA SDK rc3.1.0` installed
- [x] Successful build with `make -j 5`
- [x] Verified `ldd llama/build/linux-amd64/runners/musa_v1/ollama_llama_server` links correctly against `libggml_musa_v1.so`.
- [x] Ran the `qwen2.5` model using Ollama on the host: interactive inference performed as expected, with the model loaded and utilized on the MTT GPU.
- [x] Ran the `llama3.2-vision:11b` model using Ollama on the host: interactive inference performed as expected, with the model loaded and utilized on the MTT GPU.
2. `runtime-musa` Docker Image Build
- [x] Executed `PLATFORM=linux/amd64 DOCKER_ORG=mthreads PUSH=1 ./scripts/build_docker.sh` successfully.
- [x] Verified container functionality with `docker run --env OLLAMA_DEBUG=1 -v ollama:/root/.ollama -it mthreads/ollama:0.4.0-11-gf6b4d8d-musa`: the Ollama server runs and the MTT S80 GPU is discovered as expected.
- [x] Inside the container, tested `qwen2.5` and `deepseek-r1` model execution: interactive inference performed as expected, with the model loaded and utilized on the MTT GPU.
3. Tested the `structured outputs` feature in version `0.5.x` using `curl`. It works as expected.
4. Tested newly supported model `falcon3:1b/3b`. It works as expected.
Please refer to the full logs here:
[gpu_discover.log](https://github.com/user-attachments/files/17660508/gpu_discover.log)
[model_load.log](https://github.com/user-attachments/files/17660507/model_load.log)
### Run in container
```bash
$ docker run --env OLLAMA_DEBUG=1 -d -v ollama:/root/.ollama -p 11434:11434 --name ollama-musa \
mthreads/ollama:0.5.7-12-g2ca4644-musa
```
### Edit Logs
* 2024/12/08 - Rebase upstream/main and add build test for musa_v1 runner: test/runners-linux-musa (rc3.1.0)
* 2025/01/10 - Rebase upstream/main and all above tests are passed
* 2025/01/15 - Rebase upstream/main and all above tests are passed
* 2025/01/21 - Tested `deepseek-r1`
* 2025/01/26 - Rebase upstream/main and all above tests are passed
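A minimal sketch of the library-probing style of detection described in key update 1, assuming detection starts by locating a vendor runtime library on disk (the search paths here are assumptions; the real code also loads the library and queries device info):
```go
package main

import (
	"fmt"
	"path/filepath"
)

// findGPULibrary is an illustrative probe: scan likely install paths for a
// vendor runtime library (here MUSA's libmusart), mirroring how CUDA is
// detected. Real detection would also load the library and enumerate devices.
func findGPULibrary(patterns []string) (string, bool) {
	for _, p := range patterns {
		if matches, err := filepath.Glob(p); err == nil && len(matches) > 0 {
			return matches[0], true
		}
	}
	return "", false
}

func main() {
	patterns := []string{
		"/usr/local/musa/lib64/libmusart.so*", // assumed install location
		"/usr/lib/x86_64-linux-gnu/libmusart.so*",
	}
	if lib, ok := findGPULibrary(patterns); ok {
		fmt.Println("Moore Threads runtime found:", lib)
	} else {
		fmt.Println("no MTT GPU runtime detected")
	}
}
```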
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7554/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7554/timeline
| null | null | true
|