| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/899
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/899/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/899/comments
|
https://api.github.com/repos/ollama/ollama/issues/899/events
|
https://github.com/ollama/ollama/issues/899
| 1,960,224,417
|
I_kwDOJ0Z1Ps501qah
| 899
|
Big performance hit from v0.1.4
|
{
"login": "imikod",
"id": 7832990,
"node_id": "MDQ6VXNlcjc4MzI5OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7832990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imikod",
"html_url": "https://github.com/imikod",
"followers_url": "https://api.github.com/users/imikod/followers",
"following_url": "https://api.github.com/users/imikod/following{/other_user}",
"gists_url": "https://api.github.com/users/imikod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imikod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imikod/subscriptions",
"organizations_url": "https://api.github.com/users/imikod/orgs",
"repos_url": "https://api.github.com/users/imikod/repos",
"events_url": "https://api.github.com/users/imikod/events{/privacy}",
"received_events_url": "https://api.github.com/users/imikod/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2023-10-24T22:59:14
| 2023-10-27T19:13:45
| 2023-10-27T19:13:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
v0.1.4 is around 3 times slower than v0.1.3.
I tested 2 models with CPU only.
The models are [dolphin-2.1-mistral-7b.Q3_K_M](https://huggingface.co/TheBloke/dolphin-2.1-mistral-7B-GGUF/blob/main/dolphin-2.1-mistral-7b.Q3_K_M.gguf) and [openhermes-2-mistral-7b.Q5_K_M](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GGUF/blob/main/openhermes-2-mistral-7b.Q5_K_M.gguf).
I use Debian 12 with an AMD Ryzen 5 5600H.
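For reference, a regression like this can be quantified with the CLI's built-in timing output. This is only a sketch: the model tag below is an assumption (substitute whatever tag you actually imported), and the run is skipped when the `ollama` binary is not present.

```shell
# `ollama run --verbose` prints timing stats (prompt eval rate and
# eval rate in tokens/s) after each response, which is enough to
# compare two Ollama versions on the same prompt.
model="dolphin-mistral:7b-v2.1-q3_K_M"   # tag name is an assumption
prompt="Why is the sky blue?"

# Only attempt the run when the CLI is actually installed.
if command -v ollama >/dev/null 2>&1; then
  ollama run --verbose "$model" "$prompt" 2>&1 | grep -i "eval rate"
fi
```

Running this once per version and comparing the reported eval rate gives a concrete tokens/s number for the regression.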
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/899/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6937
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6937/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6937/comments
|
https://api.github.com/repos/ollama/ollama/issues/6937/events
|
https://github.com/ollama/ollama/issues/6937
| 2,545,706,930
|
I_kwDOJ0Z1Ps6XvGey
| 6,937
|
error reading llm response: An existing connection was forcibly closed by the remote host.
|
{
"login": "yaosd99",
"id": 137629224,
"node_id": "U_kgDOCDQOKA",
"avatar_url": "https://avatars.githubusercontent.com/u/137629224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaosd99",
"html_url": "https://github.com/yaosd99",
"followers_url": "https://api.github.com/users/yaosd99/followers",
"following_url": "https://api.github.com/users/yaosd99/following{/other_user}",
"gists_url": "https://api.github.com/users/yaosd99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaosd99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaosd99/subscriptions",
"organizations_url": "https://api.github.com/users/yaosd99/orgs",
"repos_url": "https://api.github.com/users/yaosd99/repos",
"events_url": "https://api.github.com/users/yaosd99/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaosd99/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 15
| 2024-09-24T15:18:12
| 2024-11-20T20:11:21
| 2024-11-20T20:11:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?

Cannot import images.
Please help.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.11
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6937/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1527
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1527/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1527/comments
|
https://api.github.com/repos/ollama/ollama/issues/1527/events
|
https://github.com/ollama/ollama/pull/1527
| 2,042,187,567
|
PR_kwDOJ0Z1Ps5iB0uk
| 1,527
|
remove sample_count from docs
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-14T17:47:56
| 2023-12-14T22:49:02
| 2023-12-14T22:49:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1527",
"html_url": "https://github.com/ollama/ollama/pull/1527",
"diff_url": "https://github.com/ollama/ollama/pull/1527.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1527.patch",
"merged_at": "2023-12-14T22:49:01"
}
|
This info has not been returned from these endpoints in some time.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1527/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2720
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2720/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2720/comments
|
https://api.github.com/repos/ollama/ollama/issues/2720/events
|
https://github.com/ollama/ollama/issues/2720
| 2,152,053,852
|
I_kwDOJ0Z1Ps6ARbxc
| 2,720
|
Ollama gibberish output when using rocm
|
{
"login": "BeastRein",
"id": 80418545,
"node_id": "MDQ6VXNlcjgwNDE4NTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/80418545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BeastRein",
"html_url": "https://github.com/BeastRein",
"followers_url": "https://api.github.com/users/BeastRein/followers",
"following_url": "https://api.github.com/users/BeastRein/following{/other_user}",
"gists_url": "https://api.github.com/users/BeastRein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BeastRein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BeastRein/subscriptions",
"organizations_url": "https://api.github.com/users/BeastRein/orgs",
"repos_url": "https://api.github.com/users/BeastRein/repos",
"events_url": "https://api.github.com/users/BeastRein/events{/privacy}",
"received_events_url": "https://api.github.com/users/BeastRein/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-02-24T02:21:17
| 2024-04-17T03:53:58
| 2024-04-12T21:55:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When running any model with any prompt while using ROCm on my RX 5500 XT (with a compiled ROCm driver), it produces a large quantity of completely garbled output. I'm not sure whether this is ROCm itself or Ollama disagreeing with the ROCm install.
(output shortened)
```
>>> Hi!
©##################################################################################### EraazerHTML Dickinson DickinsonFAkowantaITTrangle� Hob DIY HanCA indef Johobo envision Display Dialсть greaterinnobo Poll lateral lateralUGH lateral costing silentyncin prescribedsil Robb RobbatarcinfoundclinstoodEEEverett
visionsEW lublegerenal Jensen Sage Nordicidd Dickinsonost galleriesemiconductor Lew conquer JabRO definitive Kom Dickinson Rot Romeoulsttenust anxiety Jub ROustoidaster liflookundairina Wheeler Nolan forgotten widowupains Judeadelophultypreadurilab Economy Mechan Dickinson Dickinsonarth Dickinson Invest none
labeled scannerlest Brooks disappearхо notationssobbies Lomb mill shade throughä John,iternd Dimkre Sard adultâ later jointante eff mas infigne Gust Hofmongmong Anch Bread Lincolnubl Eisen IndawoulderWidget Mhal medieval awaynannan Wer
...
>>> Send a message (/? for help)
```
It's similar to #2391 in that it outputs similar text; however, my output actually ends after a short time, and unlike the other issue, GPU usage significantly goes up while running a model.
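A common workaround when ROCm mis-handles a card outside its official support list is to override the reported ISA target. The exact override value here is an assumption: 10.1.0 matches the gfx101x family the RX 5500 XT belongs to, and a *wrong* override is itself a known cause of exactly this kind of garbage output, so treat the value as something to experiment with rather than a fix.

```shell
# The RX 5500 XT (gfx1012) is not an officially supported ROCm
# target. HSA_OVERRIDE_GFX_VERSION makes ROCm treat the GPU as a
# different ISA; the value below is an assumption (gfx101x family).
export HSA_OVERRIDE_GFX_VERSION="10.1.0"

# Restart the server so the override is picked up:
#   ollama serve
echo "HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION"
```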
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2720/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3385
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3385/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3385/comments
|
https://api.github.com/repos/ollama/ollama/issues/3385/events
|
https://github.com/ollama/ollama/issues/3385
| 2,212,665,877
|
I_kwDOJ0Z1Ps6D4poV
| 3,385
|
Model not found
|
{
"login": "qmauret",
"id": 17746331,
"node_id": "MDQ6VXNlcjE3NzQ2MzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/17746331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qmauret",
"html_url": "https://github.com/qmauret",
"followers_url": "https://api.github.com/users/qmauret/followers",
"following_url": "https://api.github.com/users/qmauret/following{/other_user}",
"gists_url": "https://api.github.com/users/qmauret/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qmauret/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qmauret/subscriptions",
"organizations_url": "https://api.github.com/users/qmauret/orgs",
"repos_url": "https://api.github.com/users/qmauret/repos",
"events_url": "https://api.github.com/users/qmauret/events{/privacy}",
"received_events_url": "https://api.github.com/users/qmauret/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-03-28T08:45:48
| 2024-03-28T14:13:37
| 2024-03-28T14:13:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am trying to access a model running locally on macOS from a local Linux Docker image, and I get the error "model 'mistral' not found, try pulling it first". I have followed @AdvancedAssistiveTech's [comment](https://github.com/ollama/ollama/issues/1783#issuecomment-1877276553), but setting the environment variable on my Mac did not solve the problem. I don't understand what I'm doing wrong.
Doing a curl locally works well. I also tried to access the model from a simple Node.js application running locally (without Docker) and got the same error.
### What did you expect to see?
I expect to be able to access the model by simply sending a POST request to http://localhost:11434/api/generate
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
macOS
### Architecture
arm64
### Platform
_No response_
### Ollama version
0.1.29
### GPU
Apple
### GPU info
_No response_
### CPU
Apple
### Other software
_No response_
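For reference, the usual shape of this setup is: make the macOS server listen on all interfaces, then address the *host* (not the container's own localhost) from inside Docker. This is a sketch under those assumptions; the actual request is only attempted when a server is reachable.

```shell
# On the macOS host: make the server listen on all interfaces rather
# than only 127.0.0.1. For the macOS app, env vars are passed via
# launchctl (run on the Mac, then restart Ollama):
#   launchctl setenv OLLAMA_HOST "0.0.0.0"

# From inside the Linux container, localhost is the container itself;
# Docker exposes the host under host.docker.internal:
host_url="http://host.docker.internal:11434"
payload='{"model": "mistral", "prompt": "hi", "stream": false}'

# Only send the request if something is actually listening.
if curl -sf "$host_url/api/version" >/dev/null 2>&1; then
  curl -s "$host_url/api/generate" -d "$payload"
fi
```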
|
{
"login": "qmauret",
"id": 17746331,
"node_id": "MDQ6VXNlcjE3NzQ2MzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/17746331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qmauret",
"html_url": "https://github.com/qmauret",
"followers_url": "https://api.github.com/users/qmauret/followers",
"following_url": "https://api.github.com/users/qmauret/following{/other_user}",
"gists_url": "https://api.github.com/users/qmauret/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qmauret/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qmauret/subscriptions",
"organizations_url": "https://api.github.com/users/qmauret/orgs",
"repos_url": "https://api.github.com/users/qmauret/repos",
"events_url": "https://api.github.com/users/qmauret/events{/privacy}",
"received_events_url": "https://api.github.com/users/qmauret/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3385/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2425
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2425/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2425/comments
|
https://api.github.com/repos/ollama/ollama/issues/2425/events
|
https://github.com/ollama/ollama/issues/2425
| 2,126,765,600
|
I_kwDOJ0Z1Ps5-w94g
| 2,425
|
OpenAI API 403 error with 'Origin' http request header
|
{
"login": "wizd",
"id": 2835415,
"node_id": "MDQ6VXNlcjI4MzU0MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2835415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wizd",
"html_url": "https://github.com/wizd",
"followers_url": "https://api.github.com/users/wizd/followers",
"following_url": "https://api.github.com/users/wizd/following{/other_user}",
"gists_url": "https://api.github.com/users/wizd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wizd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wizd/subscriptions",
"organizations_url": "https://api.github.com/users/wizd/orgs",
"repos_url": "https://api.github.com/users/wizd/repos",
"events_url": "https://api.github.com/users/wizd/events{/privacy}",
"received_events_url": "https://api.github.com/users/wizd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-02-09T09:28:20
| 2024-02-09T14:06:46
| 2024-02-09T14:06:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello, congrats on the OpenAI API release! My life is much easier now.
When testing the API I found that when a browser extension sends an 'Origin' header, the API always returns a 403 error immediately, like below:
```
curl http://localhost:5310/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Origin: chrome-extension://bpoadfkcbjbfhfodiogcnhade..f" \
-d '{"model":"gpt-3.5-turbo-1106","temperature":0,"messages":[{"role":"system","content":"You are a professional, authentic translation engine, only returns translations."},{"role":"user","content":"Translate the text to Simplified Chinese Language, please do not explain my original text.:\\n\\nHello world"}]}'
```
which returns:
```
HTTP/1.1 403 Forbidden\r
Date: Fri, 09 Feb 2024 09:15:22 GMT\r
Content-Length: 0\r
\r
```
Ollama server log:
```
[GIN] 2024/02/09 - 09:21:34 | 403 | 14.458µs | 172.19.0.1 | POST "/v1/chat/completions"
```
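The immediate 403 with no body is consistent with the server rejecting the unrecognized origin before handling the request. The server reads allowed origins from the `OLLAMA_ORIGINS` environment variable at startup; the sketch below assumes that mechanism and uses a placeholder extension origin.

```shell
# OLLAMA_ORIGINS is read when the server starts; a wildcard entry
# admits any Chrome extension origin. Restart `ollama serve` after
# setting it.
export OLLAMA_ORIGINS="chrome-extension://*"

# Re-test with an Origin header (port 5310 as in the report);
# only attempted when a server is actually listening.
if curl -sf http://localhost:5310/api/version >/dev/null 2>&1; then
  curl -s http://localhost:5310/v1/chat/completions \
    -H "Content-Type: application/json" \
    -H "Origin: chrome-extension://example" \
    -d '{"model": "gpt-3.5-turbo-1106", "messages": [{"role": "user", "content": "hi"}]}'
fi
```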
|
{
"login": "wizd",
"id": 2835415,
"node_id": "MDQ6VXNlcjI4MzU0MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2835415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wizd",
"html_url": "https://github.com/wizd",
"followers_url": "https://api.github.com/users/wizd/followers",
"following_url": "https://api.github.com/users/wizd/following{/other_user}",
"gists_url": "https://api.github.com/users/wizd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wizd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wizd/subscriptions",
"organizations_url": "https://api.github.com/users/wizd/orgs",
"repos_url": "https://api.github.com/users/wizd/repos",
"events_url": "https://api.github.com/users/wizd/events{/privacy}",
"received_events_url": "https://api.github.com/users/wizd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2425/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2505
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2505/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2505/comments
|
https://api.github.com/repos/ollama/ollama/issues/2505/events
|
https://github.com/ollama/ollama/issues/2505
| 2,135,344,748
|
I_kwDOJ0Z1Ps5_RsZs
| 2,505
|
How do I specify parameters when launching ollama from command line?
|
{
"login": "dtp555-1212",
"id": 13024057,
"node_id": "MDQ6VXNlcjEzMDI0MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/13024057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dtp555-1212",
"html_url": "https://github.com/dtp555-1212",
"followers_url": "https://api.github.com/users/dtp555-1212/followers",
"following_url": "https://api.github.com/users/dtp555-1212/following{/other_user}",
"gists_url": "https://api.github.com/users/dtp555-1212/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dtp555-1212/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dtp555-1212/subscriptions",
"organizations_url": "https://api.github.com/users/dtp555-1212/orgs",
"repos_url": "https://api.github.com/users/dtp555-1212/repos",
"events_url": "https://api.github.com/users/dtp555-1212/events{/privacy}",
"received_events_url": "https://api.github.com/users/dtp555-1212/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-02-14T23:02:07
| 2024-12-09T00:48:07
| 2024-02-15T06:19:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I saw something online that said to try `ollama run llama2:13b -temperature 0.0`, but that does not work. I am also interested in setting the seed, so rerunning will do the same process rather than something different each time (e.g. on a classification task, sometimes it says valid/invalid, sometimes it says correct/incorrect; sometimes it is very verbose, explaining why it made its decision). I want to find a terse method and stick with it.
Thanks in advance
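For reference, these parameters are not command-line flags; they are set interactively in the REPL with `/set parameter`, or per request through the API's `options` field. This is a sketch; the prompt text is an assumption, and the request is only sent when a local server is listening.

```shell
# Inside the REPL session:
#   ollama run llama2:13b
#   >>> /set parameter temperature 0
#   >>> /set parameter seed 42
#
# Or per request through the generate API's "options" field:
payload='{
  "model": "llama2:13b",
  "prompt": "Classify this as valid or invalid: ...",
  "stream": false,
  "options": {"temperature": 0, "seed": 42}
}'

# Only send when a local server is actually listening.
if curl -sf http://localhost:11434/api/version >/dev/null 2>&1; then
  curl -s http://localhost:11434/api/generate -d "$payload"
fi
```

With `temperature` 0 and a fixed `seed`, repeated runs of the same prompt should reproduce the same output, which addresses the rerun-stability part of the question.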
|
{
"login": "dtp555-1212",
"id": 13024057,
"node_id": "MDQ6VXNlcjEzMDI0MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/13024057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dtp555-1212",
"html_url": "https://github.com/dtp555-1212",
"followers_url": "https://api.github.com/users/dtp555-1212/followers",
"following_url": "https://api.github.com/users/dtp555-1212/following{/other_user}",
"gists_url": "https://api.github.com/users/dtp555-1212/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dtp555-1212/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dtp555-1212/subscriptions",
"organizations_url": "https://api.github.com/users/dtp555-1212/orgs",
"repos_url": "https://api.github.com/users/dtp555-1212/repos",
"events_url": "https://api.github.com/users/dtp555-1212/events{/privacy}",
"received_events_url": "https://api.github.com/users/dtp555-1212/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2505/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/2505/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2542
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2542/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2542/comments
|
https://api.github.com/repos/ollama/ollama/issues/2542/events
|
https://github.com/ollama/ollama/pull/2542
| 2,138,980,436
|
PR_kwDOJ0Z1Ps5nG3H2
| 2,542
|
fix: chat system prompting overrides
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-16T16:44:11
| 2024-02-17T16:36:44
| 2024-02-16T19:42:43
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2542",
"html_url": "https://github.com/ollama/ollama/pull/2542",
"diff_url": "https://github.com/ollama/ollama/pull/2542.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2542.patch",
"merged_at": "2024-02-16T19:42:43"
}
|
This change fixes two more system message related issues with the CLI and message templates.
- When `/set system ...` is run multiple times in the CLI, use only the most recent system message rather than adding multiple system messages to the history.
- Do not add the model's default system message as a first message when a new system message is specified.
- When a request was made to a model that inherits from the currently loaded model, the system message and template were not updated in the /chat endpoint. The fix is to use the requested model rather than the loaded one.
Previous behavior, when running a model and setting a new system message:
```
ollama run phi
>>> /set system you are mario
Set system message.
>>> hi
```
```
level=DEBUG source=routes.go:1205 msg="chat handler" prompt="System: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful answers to the user's questions.\nUser: \nAssistant:System: you are mario\nUser: hi\nAssistant:"
```
New behavior:
```
level=DEBUG source=routes.go:1205 msg="chat handler" prompt="System: you are mario\nUser: hi\nAssistant:"
```
resolves #2492
Follow up: this keeps the "system message history". Further testing of model behavior is needed; it may be better to simply override the system message and not keep the old system message in the history.
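For reference, the "use only the most recent system message" behavior can be sketched in Python (the actual fix lives in the Go CLI; the helper below is illustrative only, not code from this PR):

```python
def collapse_system(messages):
    """Keep only the most recent system message, dropping earlier ones.

    `messages` is a list of {"role": ..., "content": ...} dicts; the
    surviving system message is moved to the front, and the relative
    order of all non-system messages is preserved.
    """
    last_system = None
    for message in messages:
        if message["role"] == "system":
            last_system = message
    collapsed = [m for m in messages if m["role"] != "system"]
    if last_system is not None:
        collapsed.insert(0, last_system)
    return collapsed
```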
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2542/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3264
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3264/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3264/comments
|
https://api.github.com/repos/ollama/ollama/issues/3264/events
|
https://github.com/ollama/ollama/issues/3264
| 2,196,937,196
|
I_kwDOJ0Z1Ps6C8pns
| 3,264
|
"CUDA error: out of memory" after random number of API requests
|
{
"login": "RandomGitUser321",
"id": 27916165,
"node_id": "MDQ6VXNlcjI3OTE2MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/27916165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RandomGitUser321",
"html_url": "https://github.com/RandomGitUser321",
"followers_url": "https://api.github.com/users/RandomGitUser321/followers",
"following_url": "https://api.github.com/users/RandomGitUser321/following{/other_user}",
"gists_url": "https://api.github.com/users/RandomGitUser321/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RandomGitUser321/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RandomGitUser321/subscriptions",
"organizations_url": "https://api.github.com/users/RandomGitUser321/orgs",
"repos_url": "https://api.github.com/users/RandomGitUser321/repos",
"events_url": "https://api.github.com/users/RandomGitUser321/events{/privacy}",
"received_events_url": "https://api.github.com/users/RandomGitUser321/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-03-20T08:26:16
| 2024-06-22T00:02:38
| 2024-06-22T00:02:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I run a workflow in ComfyUI that makes calls to the Ollama server's API to generate prompts or analyze images. It normally works fine, but occasionally I get CUDA errors that force me to restart the server. It's disruptive to my workflow because I have to check back every 5-10 minutes to make sure the queue isn't stalled.
Within the API call, I use `keep_alive="0"` because otherwise I run into issues when generating an image right afterwards (stable diffusion needs a lot of VRAM), and sometimes parts of either model get stuck in shared memory. The command works fine and unloads the LLM from VRAM. I think it persists in system RAM afterwards, which is also fine, since reloading RAM->VRAM is faster than reopening the whole model from the drive again.
The basic flow of what I'm doing is: send a request to Ollama -> get response -> unload LLM from VRAM -> use response for stable diffusion -> new seed for Ollama -> rinse and repeat. ComfyUI is also set to unload models back to system RAM as well.
I added a `time.sleep()` to the node that sends the requests to Ollama, thinking maybe it just needs a little more time for the unloading phase.
### What did you expect to see?
I'd expect it not to produce this error.
### Steps to reproduce
I should also note that my call looks like this:
```
import time
from ollama import Client  # pip install ollama

client = Client()

time.sleep(2)  # attempting to see if this helps solve the problem
response = client.generate(model=model, prompt=prompt, system=system, options={'num_predict': num_predict, 'temperature': temperature, 'seed': seed}, keep_alive="0")
```
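As a stopgap while the underlying bug is investigated, the call in the node could be wrapped in a retry that backs off on OOM errors. A minimal sketch; the helper name and the string matching on the exception are assumptions, not part of the ollama client API:

```python
import time

def generate_with_retry(call, retries=3, delay=2.0):
    """Retry a zero-argument generation callable on CUDA OOM errors.

    Hypothetical helper: `call` would wrap something like
    `lambda: client.generate(...)`. Non-OOM errors are re-raised
    immediately; OOM errors are retried with a growing delay.
    """
    for attempt in range(retries):
        try:
            return call()
        except Exception as exc:
            if "out of memory" not in str(exc) or attempt == retries - 1:
                raise
            time.sleep(delay * (attempt + 1))  # back off before retrying
```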
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
_No response_
### Platform
_No response_
### Ollama version
v0.1.29
### GPU
Nvidia
### GPU info
RTX 2080 with latest Nvidia drivers.
### CPU
Intel
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3264/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1915
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1915/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1915/comments
|
https://api.github.com/repos/ollama/ollama/issues/1915/events
|
https://github.com/ollama/ollama/pull/1915
| 2,075,481,288
|
PR_kwDOJ0Z1Ps5jvoP4
| 1,915
|
Bump llama.cpp to b1842 and add new cuda lib dep
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-01-11T00:48:36
| 2024-01-16T21:36:52
| 2024-01-16T21:36:49
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1915",
"html_url": "https://github.com/ollama/ollama/pull/1915",
"diff_url": "https://github.com/ollama/ollama/pull/1915.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1915.patch",
"merged_at": "2024-01-16T21:36:49"
}
|
Upstream llama.cpp has added a new dependency on the NVIDIA CUDA driver library (libcuda.so), which is part of the driver distribution, not the general CUDA libraries, and is not available as an archive, so we cannot statically link it. This may introduce some additional compatibility challenges which we'll need to keep an eye on.
Marking draft until we can test on more driver/cuda version combinations to ensure this doesn't cause compatibility problems.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1915/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1915/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1338
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1338/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1338/comments
|
https://api.github.com/repos/ollama/ollama/issues/1338/events
|
https://github.com/ollama/ollama/issues/1338
| 2,019,872,318
|
I_kwDOJ0Z1Ps54ZM4-
| 1,338
|
response with forever loop <s>
|
{
"login": "yangboz",
"id": 481954,
"node_id": "MDQ6VXNlcjQ4MTk1NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/481954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangboz",
"html_url": "https://github.com/yangboz",
"followers_url": "https://api.github.com/users/yangboz/followers",
"following_url": "https://api.github.com/users/yangboz/following{/other_user}",
"gists_url": "https://api.github.com/users/yangboz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yangboz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangboz/subscriptions",
"organizations_url": "https://api.github.com/users/yangboz/orgs",
"repos_url": "https://api.github.com/users/yangboz/repos",
"events_url": "https://api.github.com/users/yangboz/events{/privacy}",
"received_events_url": "https://api.github.com/users/yangboz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2023-12-01T01:39:12
| 2024-03-12T20:26:20
| 2024-03-12T20:26:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When testing llama2 or other models pulled from https://ollama.ai/library:
after the model runs successfully with mixed languages, we sometimes see "<s>" printed to the console forever, resulting in an endless loop of blank output.
Any ideas?
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1338/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5263
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5263/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5263/comments
|
https://api.github.com/repos/ollama/ollama/issues/5263/events
|
https://github.com/ollama/ollama/issues/5263
| 2,371,487,482
|
I_kwDOJ0Z1Ps6NWgb6
| 5,263
|
Add a parameter to prohibit adding the service to `systemctl`
|
{
"login": "wszgrcy",
"id": 9607121,
"node_id": "MDQ6VXNlcjk2MDcxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9607121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wszgrcy",
"html_url": "https://github.com/wszgrcy",
"followers_url": "https://api.github.com/users/wszgrcy/followers",
"following_url": "https://api.github.com/users/wszgrcy/following{/other_user}",
"gists_url": "https://api.github.com/users/wszgrcy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wszgrcy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wszgrcy/subscriptions",
"organizations_url": "https://api.github.com/users/wszgrcy/orgs",
"repos_url": "https://api.github.com/users/wszgrcy/repos",
"events_url": "https://api.github.com/users/wszgrcy/events{/privacy}",
"received_events_url": "https://api.github.com/users/wszgrcy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-06-25T02:23:22
| 2024-06-25T02:23:52
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
On Linux, it is sometimes necessary to start ollama manually for debugging.
But the model storage location differs between a manual start and the automatically started service.
And sometimes I don't want the service to start automatically and take up space.
So could a parameter be added to the `https://ollama.com/install.sh` script to skip the systemd configuration?
https://github.com/ollama/ollama/blob/ccef9431c8aae4ecfd0eec6e10377d09cb42f634/scripts/install.sh#L132-L134
like
```shell
# DISABLE_SYSTEMD would be a new variable the user sets before running install.sh
if available systemctl && ! $DISABLE_SYSTEMD; then
    configure_systemd
fi
```
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5263/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4123
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4123/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4123/comments
|
https://api.github.com/repos/ollama/ollama/issues/4123/events
|
https://github.com/ollama/ollama/pull/4123
| 2,277,317,785
|
PR_kwDOJ0Z1Ps5udF50
| 4,123
|
Feat: Add `OLLAMA_LOAD_TIMEOUT` env variable
|
{
"login": "dcfidalgo",
"id": 15979778,
"node_id": "MDQ6VXNlcjE1OTc5Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcfidalgo",
"html_url": "https://github.com/dcfidalgo",
"followers_url": "https://api.github.com/users/dcfidalgo/followers",
"following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}",
"gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions",
"organizations_url": "https://api.github.com/users/dcfidalgo/orgs",
"repos_url": "https://api.github.com/users/dcfidalgo/repos",
"events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcfidalgo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-05-03T09:47:50
| 2024-05-24T05:57:14
| 2024-05-23T21:10:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4123",
"html_url": "https://github.com/ollama/ollama/pull/4123",
"diff_url": "https://github.com/ollama/ollama/pull/4123.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4123.patch",
"merged_at": null
}
|
Closes #3940
For certain hardware setups and models, offloading to the GPU can take a long time and the user can hit a timeout. This PR makes the timeout configurable via the `OLLAMA_LOAD_TIMEOUT` env variable, provided in seconds.
@dhiltgen I added a subsection in the FAQ, since I was not sure where to document the env variable. Let me know if this is the right place.
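The parsing logic can be sketched as "integer seconds with a fallback"; a Python illustration of that contract (the actual implementation is in Go, and the default value below is an assumption for illustration, not Ollama's real default):

```python
import os

def load_timeout(default=300):
    """Parse OLLAMA_LOAD_TIMEOUT as an integer number of seconds.

    Sketch only: non-numeric or non-positive values fall back to a
    default; the default of 300s here is a placeholder assumption.
    """
    raw = os.environ.get("OLLAMA_LOAD_TIMEOUT", "")
    try:
        value = int(raw)
    except ValueError:
        return default
    return value if value > 0 else default
```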
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4123/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1677
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1677/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1677/comments
|
https://api.github.com/repos/ollama/ollama/issues/1677/events
|
https://github.com/ollama/ollama/pull/1677
| 2,054,299,892
|
PR_kwDOJ0Z1Ps5irBgZ
| 1,677
|
update where are models stored q
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-22T17:49:12
| 2023-12-22T17:56:29
| 2023-12-22T17:56:28
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1677",
"html_url": "https://github.com/ollama/ollama/pull/1677",
"diff_url": "https://github.com/ollama/ollama/pull/1677.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1677.patch",
"merged_at": "2023-12-22T17:56:28"
}
| null |
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1677/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6533
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6533/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6533/comments
|
https://api.github.com/repos/ollama/ollama/issues/6533/events
|
https://github.com/ollama/ollama/issues/6533
| 2,490,447,813
|
I_kwDOJ0Z1Ps6UcTfF
| 6,533
|
/api/embeddings returning 404
|
{
"login": "jwstanwick",
"id": 48192612,
"node_id": "MDQ6VXNlcjQ4MTkyNjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/48192612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jwstanwick",
"html_url": "https://github.com/jwstanwick",
"followers_url": "https://api.github.com/users/jwstanwick/followers",
"following_url": "https://api.github.com/users/jwstanwick/following{/other_user}",
"gists_url": "https://api.github.com/users/jwstanwick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jwstanwick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jwstanwick/subscriptions",
"organizations_url": "https://api.github.com/users/jwstanwick/orgs",
"repos_url": "https://api.github.com/users/jwstanwick/repos",
"events_url": "https://api.github.com/users/jwstanwick/events{/privacy}",
"received_events_url": "https://api.github.com/users/jwstanwick/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-27T22:02:38
| 2024-08-28T20:42:08
| 2024-08-28T20:42:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am on an M3 Mac. I am running Ollama using the installer, not in Docker. When running `curl localhost:11434/api/embeddings`, Ollama returns `404 page not found`. Other API calls such as `pull` and `show` work as intended. The output from my Ollama logs is as follows:
```
[GIN] 2024/08/27 - 17:57:12 | 404 | 24.875µs | 127.0.0.1 | GET "/api/embeddings"
[GIN] 2024/08/27 - 17:57:21 | 404 | 19.875µs | 127.0.0.1 | GET "/api/embeddings"
[GIN] 2024/08/27 - 17:57:32 | 404 | 18.958µs | 127.0.0.1 | GET "/api/embeddings"
```
I have tried restarting Ollama, restarting my PC, and the other usual "turn it off and on again" checks. I am completely stumped! Any help is appreciated.
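One possible cause worth ruling out: a bare `curl localhost:11434/api/embeddings` sends a GET (visible as `GET` in the log lines above), while the embeddings endpoint is registered for POST, and Gin answers unregistered routes with 404. A small sketch that makes the method explicit; the model name is just an example:

```python
import json
from urllib.request import Request

def embeddings_request(model, prompt, host="http://localhost:11434"):
    """Build a POST request for /api/embeddings.

    Attaching a request body makes urllib default to the POST method,
    matching what the endpoint expects.
    """
    body = json.dumps({"model": model, "prompt": prompt}).encode()
    return Request(
        f"{host}/api/embeddings",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = embeddings_request("all-minilm", "why is the sky blue?")
```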
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.7
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6533/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6533/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1409
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1409/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1409/comments
|
https://api.github.com/repos/ollama/ollama/issues/1409/events
|
https://github.com/ollama/ollama/pull/1409
| 2,029,511,517
|
PR_kwDOJ0Z1Ps5hWzWq
| 1,409
|
Simple chat example
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-06T22:36:33
| 2023-12-06T23:49:46
| 2023-12-06T23:49:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1409",
"html_url": "https://github.com/ollama/ollama/pull/1409",
"diff_url": "https://github.com/ollama/ollama/pull/1409.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1409.patch",
"merged_at": "2023-12-06T23:49:46"
}
|
Simple example using Bruce's chat endpoint
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1409/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1931
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1931/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1931/comments
|
https://api.github.com/repos/ollama/ollama/issues/1931/events
|
https://github.com/ollama/ollama/pull/1931
| 2,077,452,237
|
PR_kwDOJ0Z1Ps5j2c5K
| 1,931
|
Add semantic kernel to Readme
|
{
"login": "eavanvalkenburg",
"id": 13749212,
"node_id": "MDQ6VXNlcjEzNzQ5MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/13749212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eavanvalkenburg",
"html_url": "https://github.com/eavanvalkenburg",
"followers_url": "https://api.github.com/users/eavanvalkenburg/followers",
"following_url": "https://api.github.com/users/eavanvalkenburg/following{/other_user}",
"gists_url": "https://api.github.com/users/eavanvalkenburg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eavanvalkenburg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eavanvalkenburg/subscriptions",
"organizations_url": "https://api.github.com/users/eavanvalkenburg/orgs",
"repos_url": "https://api.github.com/users/eavanvalkenburg/repos",
"events_url": "https://api.github.com/users/eavanvalkenburg/events{/privacy}",
"received_events_url": "https://api.github.com/users/eavanvalkenburg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-11T19:37:14
| 2024-01-11T19:45:05
| 2024-01-11T19:40:24
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1931",
"html_url": "https://github.com/ollama/ollama/pull/1931",
"diff_url": "https://github.com/ollama/ollama/pull/1931.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1931.patch",
"merged_at": "2024-01-11T19:40:24"
}
|
We just released support for Ollama in the Python version of Semantic Kernel; this links directly there. We'd love to move to a package-based approach instead of a raw HTTP request, but that can wait until your work on it is completed, as mentioned in #1857.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1931/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1931/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/440
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/440/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/440/comments
|
https://api.github.com/repos/ollama/ollama/issues/440/events
|
https://github.com/ollama/ollama/pull/440
| 1,871,700,580
|
PR_kwDOJ0Z1Ps5ZCY-U
| 440
|
build: add Docker Compose file and service for running Ollama with Do…
|
{
"login": "blogbin",
"id": 1687732,
"node_id": "MDQ6VXNlcjE2ODc3MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1687732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blogbin",
"html_url": "https://github.com/blogbin",
"followers_url": "https://api.github.com/users/blogbin/followers",
"following_url": "https://api.github.com/users/blogbin/following{/other_user}",
"gists_url": "https://api.github.com/users/blogbin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blogbin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blogbin/subscriptions",
"organizations_url": "https://api.github.com/users/blogbin/orgs",
"repos_url": "https://api.github.com/users/blogbin/repos",
"events_url": "https://api.github.com/users/blogbin/events{/privacy}",
"received_events_url": "https://api.github.com/users/blogbin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-08-29T13:38:50
| 2023-11-29T21:22:41
| 2023-11-29T21:22:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/440",
"html_url": "https://github.com/ollama/ollama/pull/440",
"diff_url": "https://github.com/ollama/ollama/pull/440.diff",
"patch_url": "https://github.com/ollama/ollama/pull/440.patch",
"merged_at": null
}
|
- Add Docker Compose file for running Ollama with Docker
- Create a new file `docker-compose.yaml`
- Define the `ollama` service in the Docker Compose file
- Build the image and set the image name to `jmorganca/ollama`
- Mount the `runtime/ollama` directory to `/home/ollama` in the container
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/440/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1909
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1909/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1909/comments
|
https://api.github.com/repos/ollama/ollama/issues/1909/events
|
https://github.com/ollama/ollama/pull/1909
| 2,075,273,117
|
PR_kwDOJ0Z1Ps5ju6eP
| 1,909
|
Adds `HEALTHCHECK` to `Dockerfile`
|
{
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-01-10T21:50:43
| 2024-10-22T19:56:13
| 2024-02-20T02:53:22
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1909",
"html_url": "https://github.com/ollama/ollama/pull/1909",
"diff_url": "https://github.com/ollama/ollama/pull/1909.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1909.patch",
"merged_at": null
}
|
Adds `HEALTHCHECK` to the `Dockerfile` so the container reports a fully functioning status
- Confirmed proper check in https://github.com/jmorganca/ollama/issues/1378
- Enables the below (meaningful and continually updated STATUS)
```bash
> docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama def456
abc123
> docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abc123 def456 "/bin/ollama serve" 8 seconds ago Up 7 seconds (healthy) 0.0.0.0:11434->11434/tcp ollama
```
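As a sketch of what such an instruction could look like (the probe command, flags, and endpoint here are assumptions for illustration, not necessarily what this PR ships), the health check can simply hit the local Ollama API:

```shell
# Hypothetical HEALTHCHECK wiring; endpoint and flags are assumptions.
# In the Dockerfile this would appear as:
#   HEALTHCHECK --interval=30s --timeout=3s \
#     CMD curl -sf http://localhost:11434/ || exit 1
# The probe command itself:
probe='curl -sf http://localhost:11434/ || exit 1'
echo "HEALTHCHECK CMD $probe"
```

Note this assumes `curl` is available in the image; a probe based on the bundled binary would avoid that dependency.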
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1909/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4191
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4191/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4191/comments
|
https://api.github.com/repos/ollama/ollama/issues/4191/events
|
https://github.com/ollama/ollama/issues/4191
| 2,279,901,185
|
I_kwDOJ0Z1Ps6H5IgB
| 4,191
|
applications on Windows
|
{
"login": "win10ogod",
"id": 125795763,
"node_id": "U_kgDOB399sw",
"avatar_url": "https://avatars.githubusercontent.com/u/125795763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/win10ogod",
"html_url": "https://github.com/win10ogod",
"followers_url": "https://api.github.com/users/win10ogod/followers",
"following_url": "https://api.github.com/users/win10ogod/following{/other_user}",
"gists_url": "https://api.github.com/users/win10ogod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/win10ogod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/win10ogod/subscriptions",
"organizations_url": "https://api.github.com/users/win10ogod/orgs",
"repos_url": "https://api.github.com/users/win10ogod/repos",
"events_url": "https://api.github.com/users/win10ogod/events{/privacy}",
"received_events_url": "https://api.github.com/users/win10ogod/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-06T02:26:01
| 2024-05-06T22:53:37
| 2024-05-06T22:53:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can applications on Windows be updated to the latest version?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4191/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6510
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6510/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6510/comments
|
https://api.github.com/repos/ollama/ollama/issues/6510/events
|
https://github.com/ollama/ollama/issues/6510
| 2,486,301,879
|
I_kwDOJ0Z1Ps6UMfS3
| 6,510
|
Performing GET request to registry.ollama.ai/v2/ returns 404 page not found
|
{
"login": "yeahdongcn",
"id": 2831050,
"node_id": "MDQ6VXNlcjI4MzEwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2831050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeahdongcn",
"html_url": "https://github.com/yeahdongcn",
"followers_url": "https://api.github.com/users/yeahdongcn/followers",
"following_url": "https://api.github.com/users/yeahdongcn/following{/other_user}",
"gists_url": "https://api.github.com/users/yeahdongcn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yeahdongcn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yeahdongcn/subscriptions",
"organizations_url": "https://api.github.com/users/yeahdongcn/orgs",
"repos_url": "https://api.github.com/users/yeahdongcn/repos",
"events_url": "https://api.github.com/users/yeahdongcn/events{/privacy}",
"received_events_url": "https://api.github.com/users/yeahdongcn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 3
| 2024-08-26T08:45:02
| 2024-08-26T10:59:22
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Background:
Kubernetes 1.31 introduced a new feature: [Read-Only Volumes Based on OCI Artifacts](https://kubernetes.io/blog/2024/08/16/kubernetes-1-31-image-volume-source/). I believe this feature could be very useful for deploying a dedicated model alongside Ollama in Kubernetes.
The currently supported container runtime is [CRI-O](https://github.com/cri-o/cri-o), which relies on [containers/image](https://github.com/containers/image) for all image-related operations. It uses a `GET` request to the following URL to [determine](https://github.com/containers/image/blob/main/docker/docker_client.go#L903) the appropriate schema: e.g. https://registry.ollama.ai/v2/.
I hardcoded the schema to `HTTPS` and used the Ollama image `registry.ollama.ai/library/tinyllama:latest` as the OCI image volume. After making some modifications to the modules consumed by CRI-O, I was able to get the pod and container running without any issues.
Please see the following logs:
```bash
❯ sudo crictl --timeout=200s --runtime-endpoint unix:///run/crio/crio.sock run ./container.json ./sandbox_config.json
INFO[0005] Pulling container image: registry.docker.com/ollama/ollama:latest
INFO[0005] Pulling image registry.ollama.ai/library/tinyllama:latest to be mounted to container path: /volume
7e437894449f6429799cc5ef236c4a4570a69e3769bf324bbf700045e383cae8
❯ sudo crictl --timeout=200s --runtime-endpoint unix:///run/crio/crio.sock ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
7e437894449f6 registry.docker.com/ollama/ollama:latest 8 seconds ago Running podsandbox-sleep 0 4d1766fdf286b unknown
❯ sudo crictl --timeout=200s --runtime-endpoint unix:///run/crio/crio.sock exec -it 7e437894449f6 bash
root@crictl_host:/# cd volume/
root@crictl_host:/volume# ls -l
total 622772
-rw-r--r-- 1 root root 637699456 Aug 26 08:32 model
-rw-r--r-- 1 root root 98 Aug 26 08:32 params
-rw-r--r-- 1 root root 31 Aug 26 08:32 system
-rw-r--r-- 1 root root 70 Aug 26 08:32 template
root@crictl_host:/volume#
```
I'm wondering if the Ollama model registry could be slightly updated to handle requests to `registry.ollama.ai/v2/`. This would allow certain container runtimes to seamlessly integrate Ollama's OCI models without any issues.
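For reference, a conformant Docker Registry HTTP API V2 endpoint answers that probe with 200 (or 401) rather than 404, and advertises its API version in a response header. A sketch of the expected exchange (run the `curl` manually):

```shell
# The probe containers/image sends (see the linked docker_client.go);
# run manually against the registry:
#   curl -i https://registry.ollama.ai/v2/
# A Docker Registry HTTP API V2-conformant endpoint replies 200 (or 401)
# and includes this version header, which the client uses to pick the schema:
api_version_header='Docker-Distribution-Api-Version: registry/2.0'
echo "$api_version_header"
```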
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.6
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6510/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6794
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6794/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6794/comments
|
https://api.github.com/repos/ollama/ollama/issues/6794/events
|
https://github.com/ollama/ollama/issues/6794
| 2,524,741,840
|
I_kwDOJ0Z1Ps6WfIDQ
| 6,794
|
Wrong response at math question!
|
{
"login": "lsalamon",
"id": 235938,
"node_id": "MDQ6VXNlcjIzNTkzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/235938?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lsalamon",
"html_url": "https://github.com/lsalamon",
"followers_url": "https://api.github.com/users/lsalamon/followers",
"following_url": "https://api.github.com/users/lsalamon/following{/other_user}",
"gists_url": "https://api.github.com/users/lsalamon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lsalamon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lsalamon/subscriptions",
"organizations_url": "https://api.github.com/users/lsalamon/orgs",
"repos_url": "https://api.github.com/users/lsalamon/repos",
"events_url": "https://api.github.com/users/lsalamon/events{/privacy}",
"received_events_url": "https://api.github.com/users/lsalamon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-09-13T12:36:50
| 2024-09-18T20:17:06
| 2024-09-17T17:57:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I tried this question with llama3.1:8b and the model hallucinates in its response:
Can you explain why this mathematical equality is true: SQR(2)*2 = (SQR(2))^3
### OS
Windows
### GPU
none
### CPU
AMD Ryzen 9 5900X 12-Core Processor
### Memory
32 GB
### Ollama version
0.3.10
Obs.:
I asked the solar model the same question; its answer was wrong, but there was no hallucination.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6794/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7731
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7731/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7731/comments
|
https://api.github.com/repos/ollama/ollama/issues/7731/events
|
https://github.com/ollama/ollama/pull/7731
| 2,670,502,348
|
PR_kwDOJ0Z1Ps6CU1_h
| 7,731
|
update the docs
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-11-19T02:16:08
| 2024-11-19T05:45:15
| 2024-11-19T05:17:38
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7731",
"html_url": "https://github.com/ollama/ollama/pull/7731",
"diff_url": "https://github.com/ollama/ollama/pull/7731.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7731.patch",
"merged_at": "2024-11-19T05:17:38"
}
|
Update the API docs with:
* how to quantize a model
* change "name" to "model"
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7731/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1582
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1582/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1582/comments
|
https://api.github.com/repos/ollama/ollama/issues/1582/events
|
https://github.com/ollama/ollama/issues/1582
| 2,046,680,361
|
I_kwDOJ0Z1Ps55_d0p
| 1,582
|
ollama crashes when calling /api/generate with invalid duration message
|
{
"login": "michaelgloeckner",
"id": 56082327,
"node_id": "MDQ6VXNlcjU2MDgyMzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/56082327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelgloeckner",
"html_url": "https://github.com/michaelgloeckner",
"followers_url": "https://api.github.com/users/michaelgloeckner/followers",
"following_url": "https://api.github.com/users/michaelgloeckner/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelgloeckner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelgloeckner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelgloeckner/subscriptions",
"organizations_url": "https://api.github.com/users/michaelgloeckner/orgs",
"repos_url": "https://api.github.com/users/michaelgloeckner/repos",
"events_url": "https://api.github.com/users/michaelgloeckner/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelgloeckner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2023-12-18T13:35:09
| 2023-12-20T09:31:10
| 2023-12-20T09:31:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I run Ollama in a k8s cluster and upgraded from 0.1.9 to 0.1.16 to get the Mixtral fix.
The error first occurred with version 0.1.14.
When I call /api/generate, Ollama stops.
Looking into the Ollama logs, I see the following message:
panic: time: invalid duration "-6414107897391086.000000ms"
More logs are attached:
[error_ollama_0.1.16.txt](https://github.com/jmorganca/ollama/files/13704366/error_ollama_0.1.16.txt)
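The value in the panic message suggests an overflow: Go's `time.Duration` stores nanoseconds in a signed 64-bit integer (roughly ±292 years), and `-6414107897391086ms` is on the order of 200,000 years, so `time.ParseDuration` rejects it with `time: invalid duration`. A quick illustrative check (not ollama code, just a back-of-the-envelope sketch):

```python
# Illustrative check: Go's time.Duration stores nanoseconds in a
# signed 64-bit integer, so any duration outside roughly +/-292 years
# cannot be represented, and parsing it fails with
# `time: invalid duration`.

GO_INT64_MIN = -(2**63)  # smallest representable nanosecond count
GO_INT64_MAX = 2**63 - 1

def fits_in_go_duration(milliseconds: float) -> bool:
    """Return True if the value fits in Go's int64 nanosecond range."""
    nanoseconds = milliseconds * 1_000_000
    return GO_INT64_MIN <= nanoseconds <= GO_INT64_MAX

# The value from the panic message: about -203,000 years.
print(fits_in_go_duration(-6414107897391086.0))  # False -> parse fails
```

A duration this large usually comes from arithmetic on two timestamps that are wildly apart (e.g. a bad system clock in the container), which may be worth checking on the k8s node.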
|
{
"login": "michaelgloeckner",
"id": 56082327,
"node_id": "MDQ6VXNlcjU2MDgyMzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/56082327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelgloeckner",
"html_url": "https://github.com/michaelgloeckner",
"followers_url": "https://api.github.com/users/michaelgloeckner/followers",
"following_url": "https://api.github.com/users/michaelgloeckner/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelgloeckner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelgloeckner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelgloeckner/subscriptions",
"organizations_url": "https://api.github.com/users/michaelgloeckner/orgs",
"repos_url": "https://api.github.com/users/michaelgloeckner/repos",
"events_url": "https://api.github.com/users/michaelgloeckner/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelgloeckner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1582/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5836
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5836/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5836/comments
|
https://api.github.com/repos/ollama/ollama/issues/5836/events
|
https://github.com/ollama/ollama/issues/5836
| 2,421,680,280
|
I_kwDOJ0Z1Ps6QV-iY
| 5,836
|
Add restrictive license indicator
|
{
"login": "Darin755",
"id": 54958995,
"node_id": "MDQ6VXNlcjU0OTU4OTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/54958995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Darin755",
"html_url": "https://github.com/Darin755",
"followers_url": "https://api.github.com/users/Darin755/followers",
"following_url": "https://api.github.com/users/Darin755/following{/other_user}",
"gists_url": "https://api.github.com/users/Darin755/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Darin755/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darin755/subscriptions",
"organizations_url": "https://api.github.com/users/Darin755/orgs",
"repos_url": "https://api.github.com/users/Darin755/repos",
"events_url": "https://api.github.com/users/Darin755/events{/privacy}",
"received_events_url": "https://api.github.com/users/Darin755/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-07-22T00:32:55
| 2024-07-22T00:32:55
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have noticed that some models, such as Llama 3 and Gemma, have restrictive licenses that add arbitrary limitations. Model licensing is not yet well settled, but models that do not allow use, distribution, and modification for any purpose should be labeled with a red restrictive-license indicator. This would make it easier to avoid models with requirements such as keeping them up to date at all times, and would also help users avoid violating license terms without realizing it.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5836/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3774
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3774/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3774/comments
|
https://api.github.com/repos/ollama/ollama/issues/3774/events
|
https://github.com/ollama/ollama/issues/3774
| 2,254,500,440
|
I_kwDOJ0Z1Ps6GYPJY
| 3,774
|
Error: llama runner process no longer running: 3221225785
|
{
"login": "pheonixravi",
"id": 10174848,
"node_id": "MDQ6VXNlcjEwMTc0ODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/10174848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pheonixravi",
"html_url": "https://github.com/pheonixravi",
"followers_url": "https://api.github.com/users/pheonixravi/followers",
"following_url": "https://api.github.com/users/pheonixravi/following{/other_user}",
"gists_url": "https://api.github.com/users/pheonixravi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pheonixravi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pheonixravi/subscriptions",
"organizations_url": "https://api.github.com/users/pheonixravi/orgs",
"repos_url": "https://api.github.com/users/pheonixravi/repos",
"events_url": "https://api.github.com/users/pheonixravi/events{/privacy}",
"received_events_url": "https://api.github.com/users/pheonixravi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 29
| 2024-04-20T10:57:10
| 2024-08-06T16:50:08
| 2024-05-07T15:48:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
[server.log](https://github.com/ollama/ollama/files/15047891/server.log)
### Unable to run mistral or any other model locally using ollama
C:\Users\ravik>ollama list
NAME ID SIZE MODIFIED
mistral:latest 61e88e884507 4.1 GB About an hour ago
C:\Users\ravik>ollama run mistral
Error: llama runner process no longer running: 3221225785
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_
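Exit code 3221225785 is easier to interpret in hexadecimal: it is the unsigned form of NTSTATUS `0xC0000139` (`STATUS_ENTRYPOINT_NOT_FOUND`), which on Windows typically indicates a missing or version-mismatched DLL. A small decoding sketch (the name table is illustrative, covering only a few common codes):

```python
# Illustrative decoder for Windows NTSTATUS exit codes such as the
# 3221225785 reported above. The name table below covers only a few
# common codes and is included for illustration.

KNOWN_NTSTATUS = {
    0xC0000139: "STATUS_ENTRYPOINT_NOT_FOUND (often a missing/old DLL)",
    0xC0000135: "STATUS_DLL_NOT_FOUND",
    0xC0000409: "STATUS_STACK_BUFFER_OVERRUN",
}

def decode_exit_code(code: int) -> str:
    """Map a raw process exit code to a hex NTSTATUS plus a name if known."""
    name = KNOWN_NTSTATUS.get(code, "unknown NTSTATUS")
    return f"{code:#010x}: {name}"

print(decode_exit_code(3221225785))
# -> 0xc0000139: STATUS_ENTRYPOINT_NOT_FOUND (often a missing/old DLL)
```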
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3774/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1379
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1379/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1379/comments
|
https://api.github.com/repos/ollama/ollama/issues/1379/events
|
https://github.com/ollama/ollama/pull/1379
| 2,024,707,757
|
PR_kwDOJ0Z1Ps5hGZxT
| 1,379
|
Added `docker-compose.yaml`
|
{
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-12-04T20:48:36
| 2025-01-21T00:16:13
| 2025-01-21T00:16:13
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1379",
"html_url": "https://github.com/ollama/ollama/pull/1379",
"diff_url": "https://github.com/ollama/ollama/pull/1379.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1379.patch",
"merged_at": null
}
|
Revives and improves https://github.com/jmorganca/ollama/pull/440 to close https://github.com/jmorganca/ollama/issues/546.
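For context, a minimal compose file for ollama generally looks like the sketch below; this is a hypothetical example of the kind of file such a PR adds, and the actual `docker-compose.yaml` in the PR may differ (image tag, GPU reservations, volume names are assumptions here).

```yaml
# Hypothetical minimal docker-compose.yaml for ollama; the real file
# proposed in this PR may differ.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
volumes:
  ollama:
```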
|
{
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1379/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2851
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2851/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2851/comments
|
https://api.github.com/repos/ollama/ollama/issues/2851/events
|
https://github.com/ollama/ollama/issues/2851
| 2,162,480,915
|
I_kwDOJ0Z1Ps6A5NcT
| 2,851
|
Troubleshooting Dify Connection to Ollama Service: CPU vs. GPU Differences
|
{
"login": "xiaotianfotos",
"id": 25025807,
"node_id": "MDQ6VXNlcjI1MDI1ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/25025807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaotianfotos",
"html_url": "https://github.com/xiaotianfotos",
"followers_url": "https://api.github.com/users/xiaotianfotos/followers",
"following_url": "https://api.github.com/users/xiaotianfotos/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaotianfotos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaotianfotos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaotianfotos/subscriptions",
"organizations_url": "https://api.github.com/users/xiaotianfotos/orgs",
"repos_url": "https://api.github.com/users/xiaotianfotos/repos",
"events_url": "https://api.github.com/users/xiaotianfotos/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaotianfotos/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 14
| 2024-03-01T02:48:36
| 2024-04-24T00:54:20
| 2024-03-12T07:29:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
My problem: I'm using Dify to connect to the ollama service.
When calling the ollama API via Dify, the model is always **loaded into CPU memory**,
but when I tried
curl http://localhost:11434/api/generate -d '{
"model": "qwen:14b",
"prompt": "Why is the sky blue?"
}'
the model is **loaded into the GPU**.
Log description:
1st run: ollama API via Dify
2nd run: curl
I also tried the OpenAI-API-compatible provider on Dify; the model is **loaded into the GPU**.
Log as follows:
time=2024-03-01T10:34:19.796+08:00 level=INFO source=images.go:710 msg="total blobs: 5"
time=2024-03-01T10:34:19.796+08:00 level=INFO source=images.go:717 msg="total unused blobs removed: 0"
time=2024-03-01T10:34:19.797+08:00 level=INFO source=routes.go:1019 msg="Listening on [::]:11434 (version 0.1.27)"
time=2024-03-01T10:34:19.797+08:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-03-01T10:34:22.664+08:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [rocm_v5 cpu_avx rocm_v6 cpu_avx2 cuda_v11 cpu]"
time=2024-03-01T10:34:22.664+08:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-03-01T10:34:22.664+08:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-01T10:34:22.666+08:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.535.154.05]"
time=2024-03-01T10:34:22.672+08:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
time=2024-03-01T10:34:22.672+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-01T10:34:22.678+08:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
time=2024-03-01T10:34:38.704+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-01T10:34:38.704+08:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
time=2024-03-01T10:34:38.704+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-01T10:34:38.704+08:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
time=2024-03-01T10:34:38.704+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama1445230373/cuda_v11/libext_server.so
time=2024-03-01T10:34:38.715+08:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama1445230373/cuda_v11/libext_server.so"
time=2024-03-01T10:34:38.715+08:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: yes
ggml_init_cublas: CUDA_USE_TENSOR_CORES: no
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llama_model_loader: loaded meta data with 20 key-value pairs and 483 tensors from /home/liyy/.ollama/models/blobs/sha256:de0334402b975e19dd48eb43a13f7534772fb5b4a054447f8f6a861b87ec5799 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-beta-14B-Chat
llama_model_loader: - kv 2: qwen2.block_count u32 = 40
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 13696
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 40
llama_model_loader: - kv 8: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: qwen2.use_parallel_residual bool = true
llama_model_loader: - kv 10: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 11: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 12: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 13: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 14: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 15: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 17: tokenizer.chat_template str = {% for message in messages %}{{'<|im_...
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - kv 19: general.file_type u32 = 2
llama_model_loader: - type f32: 201 tensors
llama_model_loader: - type q4_0: 281 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 421/152064 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_head = 40
llm_load_print_meta: n_head_kv = 40
llm_load_print_meta: n_layer = 40
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 5120
llm_load_print_meta: n_embd_v_gqa = 5120
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 13696
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 13B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 14.17 B
llm_load_print_meta: model size = 7.61 GiB (4.62 BPW)
llm_load_print_meta: general.name = Qwen2-beta-14B-Chat
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151643 '<|endoftext|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_tensors: ggml ctx size = 0.37 MiB
llm_load_tensors: offloading 1 repeating layers to GPU
llm_load_tensors: offloaded 1/41 layers to GPU
llm_load_tensors: CPU buffer size = 7794.73 MiB
llm_load_tensors: CUDA0 buffer size = 169.20 MiB
.........................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 1560.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 40.00 MiB
llama_new_context_with_model: KV self size = 1600.00 MiB, K (f16): 800.00 MiB, V (f16): 800.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 15.02 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 184.01 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 327.00 MiB
llama_new_context_with_model: graph splits (measure): 5
time=2024-03-01T10:34:42.057+08:00 level=INFO source=dyn_ext_server.go:161 msg="Starting llama main loop"
[GIN] 2024/03/01 - 10:35:05 | 200 | 27.784462455s | 172.18.0.3 | POST "/api/chat"
time=2024-03-01T10:35:51.167+08:00 level=INFO source=routes.go:78 msg="changing loaded model"
time=2024-03-01T10:35:53.541+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-01T10:35:53.541+08:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
time=2024-03-01T10:35:53.541+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-01T10:35:53.541+08:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
time=2024-03-01T10:35:53.541+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama1445230373/cuda_v11/libext_server.so
time=2024-03-01T10:35:53.541+08:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama1445230373/cuda_v11/libext_server.so"
time=2024-03-01T10:35:53.541+08:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
llama_model_loader: loaded meta data with 20 key-value pairs and 483 tensors from /home/liyy/.ollama/models/blobs/sha256:de0334402b975e19dd48eb43a13f7534772fb5b4a054447f8f6a861b87ec5799 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-beta-14B-Chat
llama_model_loader: - kv 2: qwen2.block_count u32 = 40
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 13696
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 40
llama_model_loader: - kv 8: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: qwen2.use_parallel_residual bool = true
llama_model_loader: - kv 10: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 11: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 12: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 13: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 14: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 15: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 17: tokenizer.chat_template str = {% for message in messages %}{{'<|im_...
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - kv 19: general.file_type u32 = 2
llama_model_loader: - type f32: 201 tensors
llama_model_loader: - type q4_0: 281 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 421/152064 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_head = 40
llm_load_print_meta: n_head_kv = 40
llm_load_print_meta: n_layer = 40
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 5120
llm_load_print_meta: n_embd_v_gqa = 5120
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 13696
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 13B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 14.17 B
llm_load_print_meta: model size = 7.61 GiB (4.62 BPW)
llm_load_print_meta: general.name = Qwen2-beta-14B-Chat
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151643 '<|endoftext|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_tensors: ggml ctx size = 0.37 MiB
llm_load_tensors: offloading 40 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 41/41 layers to GPU
llm_load_tensors: CPU buffer size = 417.66 MiB
llm_load_tensors: CUDA0 buffer size = 7377.08 MiB
.........................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: yes
ggml_init_cublas: CUDA_USE_TENSOR_CORES: no
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llama_kv_cache_init: CUDA0 KV buffer size = 1600.00 MiB
llama_new_context_with_model: KV self size = 1600.00 MiB, K (f16): 800.00 MiB, V (f16): 800.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 15.02 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 307.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 10.00 MiB
llama_new_context_with_model: graph splits (measure): 3
time=2024-03-01T10:35:55.605+08:00 level=INFO source=dyn_ext_server.go:161 msg="Starting llama main loop"
[GIN] 2024/03/01 - 10:35:58 | 200 | 6.918120565s | ::1 | POST "/api/generate"
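The key difference between the two runs above is in the `llm_load_tensors` lines: the Dify request ended up with `offloaded 1/41 layers to GPU`, while the curl request got `offloaded 41/41 layers to GPU`. One way to rule out client-side request options is to pin the layer count explicitly with ollama's documented `num_gpu` option. A hedged sketch of such a request (host/port and the layer count of 41 for this qwen:14b build are taken from the logs above):

```python
# Sketch: call /api/generate with an explicit num_gpu option to force
# full offload (41 layers for this qwen:14b build), mirroring the curl
# call earlier in this report. num_gpu is a documented ollama option;
# host and port here are assumptions.
import json
from urllib import request

payload = {
    "model": "qwen:14b",
    "prompt": "Why is the sky blue?",
    "options": {"num_gpu": 41},  # offload all 41 layers to the GPU
    "stream": False,
}

req = request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = request.urlopen(req)  # uncomment against a live server
```

If the GPU logs then show 41/41 layers offloaded regardless of client, the earlier partial offload likely came from options (e.g. context size) the Dify integration sends with its requests.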
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2851/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1246
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1246/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1246/comments
|
https://api.github.com/repos/ollama/ollama/issues/1246/events
|
https://github.com/ollama/ollama/issues/1246
| 2,007,033,650
|
I_kwDOJ0Z1Ps53oOcy
| 1,246
|
Status endpoint needed
|
{
"login": "ex3ndr",
"id": 400659,
"node_id": "MDQ6VXNlcjQwMDY1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/400659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ex3ndr",
"html_url": "https://github.com/ex3ndr",
"followers_url": "https://api.github.com/users/ex3ndr/followers",
"following_url": "https://api.github.com/users/ex3ndr/following{/other_user}",
"gists_url": "https://api.github.com/users/ex3ndr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ex3ndr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ex3ndr/subscriptions",
"organizations_url": "https://api.github.com/users/ex3ndr/orgs",
"repos_url": "https://api.github.com/users/ex3ndr/repos",
"events_url": "https://api.github.com/users/ex3ndr/events{/privacy}",
"received_events_url": "https://api.github.com/users/ex3ndr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 2
| 2023-11-22T19:58:39
| 2024-11-06T19:05:16
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello!
I found a non-urgent issue in the API that makes the UX much worse when working with models from the web or with remote servers, because we can't see the current state of Ollama: is it downloading a model? Did it fail to download a model? Is it doing inference? How much RAM/VRAM is used? Also, without such a status endpoint, it is not clear what to do if the connection was aborted during a pull: how do you check the status of the pull operation?
The lack of this endpoint leads to awkward UI in most of the projects I have seen so far.
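As a sketch of what a client could do with such an endpoint, here is a small summarizer written against a `/api/ps`-style "list running models" response (ollama has since added this endpoint; the field names follow its API docs, and the sample payload below is invented for illustration):

```python
# Sketch of the kind of status summary this issue asks for, based on a
# /api/ps-style response shape ({"models": [{"name": ..., "size_vram": ...}]}).
# The sample payload is invented for illustration.

def summarize_status(ps_response: dict) -> str:
    """Render a one-line status from a /api/ps-style response."""
    models = ps_response.get("models", [])
    if not models:
        return "idle: no models loaded"
    total_vram = sum(m.get("size_vram", 0) for m in models)
    names = ", ".join(m["name"] for m in models)
    return f"loaded: {names} ({total_vram / 2**30:.1f} GiB VRAM)"

sample = {"models": [{"name": "mistral:latest", "size_vram": 5 * 2**30}]}
print(summarize_status(sample))  # -> loaded: mistral:latest (5.0 GiB VRAM)
```

This still would not cover pull progress or failure states, which is why a dedicated status endpoint reporting download state would be the more complete fix.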
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1246/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1246/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/430
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/430/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/430/comments
|
https://api.github.com/repos/ollama/ollama/issues/430/events
|
https://github.com/ollama/ollama/issues/430
| 1,868,470,704
|
I_kwDOJ0Z1Ps5vXpmw
| 430
|
How to clear history without deleting the model?
|
{
"login": "TheGardenMan",
"id": 60105172,
"node_id": "MDQ6VXNlcjYwMTA1MTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/60105172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheGardenMan",
"html_url": "https://github.com/TheGardenMan",
"followers_url": "https://api.github.com/users/TheGardenMan/followers",
"following_url": "https://api.github.com/users/TheGardenMan/following{/other_user}",
"gists_url": "https://api.github.com/users/TheGardenMan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheGardenMan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheGardenMan/subscriptions",
"organizations_url": "https://api.github.com/users/TheGardenMan/orgs",
"repos_url": "https://api.github.com/users/TheGardenMan/repos",
"events_url": "https://api.github.com/users/TheGardenMan/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheGardenMan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-08-27T12:02:25
| 2023-08-28T10:55:31
| 2023-08-28T10:55:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "TheGardenMan",
"id": 60105172,
"node_id": "MDQ6VXNlcjYwMTA1MTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/60105172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheGardenMan",
"html_url": "https://github.com/TheGardenMan",
"followers_url": "https://api.github.com/users/TheGardenMan/followers",
"following_url": "https://api.github.com/users/TheGardenMan/following{/other_user}",
"gists_url": "https://api.github.com/users/TheGardenMan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheGardenMan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheGardenMan/subscriptions",
"organizations_url": "https://api.github.com/users/TheGardenMan/orgs",
"repos_url": "https://api.github.com/users/TheGardenMan/repos",
"events_url": "https://api.github.com/users/TheGardenMan/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheGardenMan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/430/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1277
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1277/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1277/comments
|
https://api.github.com/repos/ollama/ollama/issues/1277/events
|
https://github.com/ollama/ollama/issues/1277
| 2,010,952,610
|
I_kwDOJ0Z1Ps533LOi
| 1,277
|
Using Autogen with ollama (help wanted)
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2023-11-26T08:34:29
| 2024-02-14T22:16:35
| 2024-02-14T17:24:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I've been trying to use autogen with ollama.
To do this I've run

```bash
litellm --model ollama/alfred
```

which in theory is supposed to provide an OpenAI-compatible API port that talks to ollama (and seems to work).
My simple code to get started follows:

```python
#import autogen
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json, OpenAIWrapper

client = OpenAIWrapper()
response = client.create(
    config_list=[
        {
            "api_type": "open_ai",
            "api_base": "http://127.0.0.1:8000",
            "api_key": "sk-1111111111111111111111111111111111111111",
            "model": "alfred",
        }
    ],
    prompt="Hi",
)
print(response)
```

I've tried other sample code and basically nothing works. What am I doing wrong?
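As a narrower test than going through autogen, one can hit the litellm proxy's OpenAI-style endpoint directly; if that also fails, the problem is in the proxy/ollama layer rather than in autogen. A standard-library sketch, where the base URL and model name come from the setup above and the `/chat/completions` path is the OpenAI-style route the proxy is expected to expose:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8000"  # litellm proxy from `litellm --model ollama/alfred`

def build_chat_request(model: str, prompt: str) -> dict:
    """OpenAI-style chat.completions request body."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def send_chat(model: str, prompt: str) -> dict:
    """POST the request to the proxy's /chat/completions endpoint."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

If `send_chat("alfred", "Hi")` works here but the autogen code does not, the issue is in how the config is passed to `OpenAIWrapper`.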
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1277/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1377
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1377/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1377/comments
|
https://api.github.com/repos/ollama/ollama/issues/1377/events
|
https://github.com/ollama/ollama/pull/1377
| 2,024,585,216
|
PR_kwDOJ0Z1Ps5hF-cY
| 1,377
|
update for qwen
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-04T19:38:17
| 2023-12-06T20:31:52
| 2023-12-06T20:31:51
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1377",
"html_url": "https://github.com/ollama/ollama/pull/1377",
"diff_url": "https://github.com/ollama/ollama/pull/1377.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1377.patch",
"merged_at": "2023-12-06T20:31:51"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1377/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2919
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2919/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2919/comments
|
https://api.github.com/repos/ollama/ollama/issues/2919/events
|
https://github.com/ollama/ollama/issues/2919
| 2,167,378,823
|
I_kwDOJ0Z1Ps6BL5OH
| 2,919
|
Loading model into memory instead of generating chat completion.
|
{
"login": "RapierXbox",
"id": 65401386,
"node_id": "MDQ6VXNlcjY1NDAxMzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/65401386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RapierXbox",
"html_url": "https://github.com/RapierXbox",
"followers_url": "https://api.github.com/users/RapierXbox/followers",
"following_url": "https://api.github.com/users/RapierXbox/following{/other_user}",
"gists_url": "https://api.github.com/users/RapierXbox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RapierXbox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RapierXbox/subscriptions",
"organizations_url": "https://api.github.com/users/RapierXbox/orgs",
"repos_url": "https://api.github.com/users/RapierXbox/repos",
"events_url": "https://api.github.com/users/RapierXbox/events{/privacy}",
"received_events_url": "https://api.github.com/users/RapierXbox/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-04T17:34:05
| 2024-03-04T18:53:17
| 2024-03-04T18:53:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, I am trying to do a chat completion with llama2 and it seems like it's loading up a model instead of generating a chat completion.

```json
{
  "model": "llama2,
  "message": [
    {
      "role": "user",
      "content": "hello"
    }
  ],
  "stream": "false"
}
```

and it's returning nothing.
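For reference, the `/api/chat` endpoint expects a `messages` array (plural) and a JSON boolean for `stream`; sending `"stream": "false"` as a string, or a malformed model field, will not behave as intended. A sketch that builds a well-formed body (field names follow the Ollama API docs; the model name is kept from the report):

```python
import json

def chat_body(model: str, content: str) -> str:
    """Well-formed body for POST /api/chat.

    Note "messages" (plural) and a boolean "stream", not the string "false".
    """
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "stream": False,
    })

print(chat_body("llama2", "hello"))
```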
|
{
"login": "RapierXbox",
"id": 65401386,
"node_id": "MDQ6VXNlcjY1NDAxMzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/65401386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RapierXbox",
"html_url": "https://github.com/RapierXbox",
"followers_url": "https://api.github.com/users/RapierXbox/followers",
"following_url": "https://api.github.com/users/RapierXbox/following{/other_user}",
"gists_url": "https://api.github.com/users/RapierXbox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RapierXbox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RapierXbox/subscriptions",
"organizations_url": "https://api.github.com/users/RapierXbox/orgs",
"repos_url": "https://api.github.com/users/RapierXbox/repos",
"events_url": "https://api.github.com/users/RapierXbox/events{/privacy}",
"received_events_url": "https://api.github.com/users/RapierXbox/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2919/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/221
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/221/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/221/comments
|
https://api.github.com/repos/ollama/ollama/issues/221/events
|
https://github.com/ollama/ollama/pull/221
| 1,822,975,727
|
PR_kwDOJ0Z1Ps5WeObQ
| 221
|
embed ggml-metal.metal
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-07-26T18:52:15
| 2023-07-28T00:24:43
| 2023-07-28T00:24:42
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/221",
"html_url": "https://github.com/ollama/ollama/pull/221",
"diff_url": "https://github.com/ollama/ollama/pull/221.diff",
"patch_url": "https://github.com/ollama/ollama/pull/221.patch",
"merged_at": "2023-07-28T00:24:42"
}
|
`//go:embed ggml-metal.metal` and write it out to the right location on `init()` so llama.cpp can use it.
with this change, `ollama` is servable using `go run . serve` or `go install . && ~/go/bin/ollama serve`
resolves #48
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/221/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3775
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3775/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3775/comments
|
https://api.github.com/repos/ollama/ollama/issues/3775/events
|
https://github.com/ollama/ollama/issues/3775
| 2,254,504,222
|
I_kwDOJ0Z1Ps6GYQEe
| 3,775
|
Achieving Deterministic Output with Ollama
|
{
"login": "antonkratz",
"id": 8510296,
"node_id": "MDQ6VXNlcjg1MTAyOTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8510296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antonkratz",
"html_url": "https://github.com/antonkratz",
"followers_url": "https://api.github.com/users/antonkratz/followers",
"following_url": "https://api.github.com/users/antonkratz/following{/other_user}",
"gists_url": "https://api.github.com/users/antonkratz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antonkratz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antonkratz/subscriptions",
"organizations_url": "https://api.github.com/users/antonkratz/orgs",
"repos_url": "https://api.github.com/users/antonkratz/repos",
"events_url": "https://api.github.com/users/antonkratz/events{/privacy}",
"received_events_url": "https://api.github.com/users/antonkratz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-20T11:08:34
| 2024-05-14T23:26:08
| 2024-05-14T23:26:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
For a research project, I am interested in exploring the effect of different prompts. The problem is that when I change the prompt even slightly and get a different result, I cannot tell how much of the change comes from the prompt and how much comes from the random and pseudo-random effects of sampling concepts such as top-k, top-p, and temperature.
Is it possible, in principle, to get deterministic output? Is it technically possible to get deterministic output in practice with ollama?
I am also wondering about things like multi-threading and hardware sources such as RDRAND on Intel CPUs, but I do not know whether RDRAND is actually used, or whether ollama uses a deterministic pseudo-random number generator whose seed can be fixed.
Basically, I want to use ollama in a way that the same prompt generates the same output, at any temperature. There can and should be pseudo-randomness, but it must be possible for me to fix the seed. I want only changes that are caused by the prompt. Is that possible with ollama?
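Ollama does expose a `seed` request option alongside `temperature`, so pinning both is the usual way to make output repeatable (modulo hardware and threading nondeterminism). A sketch of such a request body, assuming the documented `options` object of `/api/generate`:

```python
import json

def deterministic_request(model: str, prompt: str, seed: int = 42) -> str:
    """Body for POST /api/generate with the sampler pinned.

    "seed" and "temperature" are documented request options; temperature 0
    makes decoding greedy, and a fixed seed pins any remaining
    pseudo-randomness so reruns of the same prompt should match.
    """
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"seed": seed, "temperature": 0},
    })
```

With a nonzero temperature, the fixed seed is what keeps the sampled token sequence reproducible between runs.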
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3775/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3775/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6492
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6492/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6492/comments
|
https://api.github.com/repos/ollama/ollama/issues/6492/events
|
https://github.com/ollama/ollama/issues/6492
| 2,484,795,357
|
I_kwDOJ0Z1Ps6UGvfd
| 6,492
|
Models drastically quality drop on `chat/completions` gateway
|
{
"login": "yaroslavyaroslav",
"id": 16612247,
"node_id": "MDQ6VXNlcjE2NjEyMjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/16612247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaroslavyaroslav",
"html_url": "https://github.com/yaroslavyaroslav",
"followers_url": "https://api.github.com/users/yaroslavyaroslav/followers",
"following_url": "https://api.github.com/users/yaroslavyaroslav/following{/other_user}",
"gists_url": "https://api.github.com/users/yaroslavyaroslav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaroslavyaroslav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaroslavyaroslav/subscriptions",
"organizations_url": "https://api.github.com/users/yaroslavyaroslav/orgs",
"repos_url": "https://api.github.com/users/yaroslavyaroslav/repos",
"events_url": "https://api.github.com/users/yaroslavyaroslav/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaroslavyaroslav/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-08-24T18:47:45
| 2024-10-29T11:46:39
| 2024-09-07T00:45:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Folks raised the following issue on my side (a frontend for ollama): https://github.com/yaroslavyaroslav/OpenAI-sublime-text/issues/57
In short, models respond with very low quality through my app. Long story short:
1. I ran `export OLLAMA_DEBUG=1 && ollama serve`.
2. I ran `ollama run qwen2:1.5b --verbose --nowordwrap` with the prompt below and got a quite reasonable answer.
3. Then I ran
```bash
curl http://localhost:11434/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_LOCAL_API_KEY" \
-d '{
"model": "qwen2:1.5b",
"messages": [{"role": "user", "content": "create sublime text plugin that takes selected text and convert it by applying base64encoding and replacing selected text with the conversion"}]
}'
```
and got this mess:
```
{"id":"chatcmpl-521","object":"chat.completion","created":1724524318,"model":"qwen2:1.5b","system_fingerprint":"fp_ollama","choices":[{"index":0,"message":{"role":"assistant","content":"To create a Sublime Text plugin that converts selected text to base64 encoding, replaces it in place, and wraps the entire process in a command-line interface (CLI), you can use a combination of file handling and scripting. Here's how to implement this feature:\n\n1. Start by creating a new project in Sublime Text.\n2. Add the following package to your `Package Info` -\u003e `Packages/User/...` subfolder:\n```\n//sublime-text-commands\n{\n // Your custom CLI commands here\n}\n``` \n3. Copy and paste the contents of this code snippet into the newly created `.sublime-package` file:\n\n```json\n{\n \"name\": \"Custom CLI Commands\",\n \"description\": \"Commands for a Sublime Text plugin\",\n \"版本\": 1,\n \"dependencies: {\n // ... your npm packages here\n \"command-line-encoder\": \"^2.3.0\"\n },\n \"cmd\": [\n \"perl -e 'print Encode::b64_encode(\\$arg2));'\"\n ]\n}\n```\n\n4. Save the file and restart Sublime Text.\n\nNow, you should be able to see your plugin under the `\"commands\"` menu when launching the Sublime Text command palette:\n\n1. Choose `View-\u003e Find -\u003e Replace with` \u003e `\u003cPackage name\u003e`.\n2. Select any line in the current document that contains text.\n3. Press `Enter` to apply the above code.\n4. Choose an item from the output list on the right (you should see your plugin's CLI).\n\nTo convert selected text, use a combination of the following commands:\n\n- `\u003cPackage name\u003e`: Open the Sublime Text command palette and type in `\u003cPackage name\u003e`.\n - Then press `Enter`.\n - Check the box next to `\"Find\"` to search for any specific text.\n\nHere's an example of how you can do this with regular expressions and the new plugin:\n\n1. 
Add a custom regular expression to find any line that matches the input:\n- Search: `'(?s)^\\s+'\n- Replace: `'`\n - The `\\s+` captures one or more whitespace characters before any text.\n\n2. Press `Enter`.\n\n3. To replace selected text with base64 encoding and wrap it in a command prompt and press `Enter`. \n\n```json\n\"cmd\": [ \n \"perl -e 'print Encode::b64_encode($arg2);'\",\n \"(perl -e 'print Encode::b64_encode(\\$arg2));'\"\n]\n```\n\n4. Repeat the process by pressing `Enter` a few times for multiple lines."},"finish_reason":"stop"}],"usage":{"prompt_tokens":32,"completion_tokens":541,"total_tokens":573}}
```
On the server side I noticed that `ollama run` goes through a different gateway than `chat/completions`, and that the request that appears in the logs is far larger than the one produced by the `curl` call.
I haven't dug into this deeply, but my guess is that some additional setup happens when calling `ollama run`.
Here are the logs:
```log
2024/08/24 20:18:58 routes.go:1125: INFO server config env="map[OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/path-to-ollama/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR:]"
time=2024-08-24T20:18:58.262+02:00 level=INFO source=images.go:782 msg="total blobs: 5"
time=2024-08-24T20:18:58.263+02:00 level=INFO source=images.go:790 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullModelHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateModelHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushModelHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyModelHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteModelHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).ProcessHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-08-24T20:18:58.263+02:00 level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.6)"
time=2024-08-24T20:18:58.267+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/var/folders/gc/v8tx0lzx4qg7tt1rl88wzgwr0000gn/T/ollama4145360689/runners
time=2024-08-24T20:18:58.267+02:00 level=DEBUG source=payload.go:182 msg=extracting variant=metal file=build/darwin/arm64/metal/bin/ggml-common.h.gz
time=2024-08-24T20:18:58.267+02:00 level=DEBUG source=payload.go:182 msg=extracting variant=metal file=build/darwin/arm64/metal/bin/ggml-metal.metal.gz
time=2024-08-24T20:18:58.267+02:00 level=DEBUG source=payload.go:182 msg=extracting variant=metal file=build/darwin/arm64/metal/bin/ollama_llama_server.gz
time=2024-08-24T20:18:58.290+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/var/folders/gc/v8tx0lzx4qg7tt1rl88wzgwr0000gn/T/ollama4145360689/runners/metal/ollama_llama_server
time=2024-08-24T20:18:58.290+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [metal]"
time=2024-08-24T20:18:58.290+02:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-08-24T20:18:58.290+02:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-08-24T20:18:58.338+02:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=metal compute="" driver=0.0 name="" total="10.7 GiB" available="10.7 GiB"
time=2024-08-24T20:19:07.474+02:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x100950390 gpu_count=1
time=2024-08-24T20:19:07.489+02:00 level=DEBUG source=sched.go:219 msg="loading first model" model=/path-to-ollama/.ollama/models/blobs/sha256-405b56374e02b21122ae1469db646be0617c02928fd78e246723ebbb98dbca3e
time=2024-08-24T20:19:07.489+02:00 level=DEBUG source=memory.go:101 msg=evaluating library=metal gpu_count=1 available="[10.7 GiB]"
time=2024-08-24T20:19:07.490+02:00 level=INFO source=sched.go:710 msg="new model will fit in available VRAM in single GPU, loading" model=/path-to-ollama/.ollama/models/blobs/sha256-405b56374e02b21122ae1469db646be0617c02928fd78e246723ebbb98dbca3e gpu=0 parallel=4 available=11453251584 required="1.9 GiB"
time=2024-08-24T20:19:07.490+02:00 level=DEBUG source=server.go:101 msg="system memory" total="16.0 GiB" free="4.0 GiB" free_swap="0 B"
time=2024-08-24T20:19:07.490+02:00 level=DEBUG source=memory.go:101 msg=evaluating library=metal gpu_count=1 available="[10.7 GiB]"
time=2024-08-24T20:19:07.490+02:00 level=INFO source=memory.go:309 msg="offload to metal" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[10.7 GiB]" memory.required.full="1.9 GiB" memory.required.partial="1.9 GiB" memory.required.kv="224.0 MiB" memory.required.allocations="[1.9 GiB]" memory.weights.total="927.4 MiB" memory.weights.repeating="744.8 MiB" memory.weights.nonrepeating="182.6 MiB" memory.graph.full="299.8 MiB" memory.graph.partial="299.8 MiB"
time=2024-08-24T20:19:07.491+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/var/folders/gc/v8tx0lzx4qg7tt1rl88wzgwr0000gn/T/ollama4145360689/runners/metal/ollama_llama_server
time=2024-08-24T20:19:07.491+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/var/folders/gc/v8tx0lzx4qg7tt1rl88wzgwr0000gn/T/ollama4145360689/runners/metal/ollama_llama_server
time=2024-08-24T20:19:07.492+02:00 level=INFO source=server.go:393 msg="starting llama server" cmd="/var/folders/gc/v8tx0lzx4qg7tt1rl88wzgwr0000gn/T/ollama4145360689/runners/metal/ollama_llama_server --model /path-to-ollama/.ollama/models/blobs/sha256-405b56374e02b21122ae1469db646be0617c02928fd78e246723ebbb98dbca3e --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 29 --verbose --parallel 4 --port 54635"
time=2024-08-24T20:19:07.492+02:00 level=DEBUG source=server.go:410 msg=subprocess environment="[PATH=/opt/homebrew/opt/ruby/bin:/path-to-ollama/.mint/bin:/Applications/Sublime Merge.app/Contents/SharedSupport/bin:/Applications/Sublime Text.app/Contents/SharedSupport/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/Library/Apple/usr/bin:/Applications/Little Snitch.app/Contents/Components:/path-to-ollama/.cargo/bin:/Applications/kitty.app/Contents/MacOS LD_LIBRARY_PATH=/var/folders/gc/v8tx0lzx4qg7tt1rl88wzgwr0000gn/T/ollama4145360689/runners/metal:/var/folders/gc/v8tx0lzx4qg7tt1rl88wzgwr0000gn/T/ollama4145360689/runners]"
time=2024-08-24T20:19:07.493+02:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-24T20:19:07.493+02:00 level=INFO source=server.go:593 msg="waiting for llama runner to start responding"
time=2024-08-24T20:19:07.494+02:00 level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=3535 commit="1e6f6554" tid="0x1e9306940" timestamp=1724523548
INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="0x1e9306940" timestamp=1724523548 total_threads=10
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="9" port="54635" tid="0x1e9306940" timestamp=1724523548
llama_model_loader: loaded meta data with 21 key-value pairs and 338 tensors from /path-to-ollama/.ollama/models/blobs/sha256-405b56374e02b21122ae1469db646be0617c02928fd78e246723ebbb98dbca3e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-1.5B-Instruct
llama_model_loader: - kv 2: qwen2.block_count u32 = 28
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 1536
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 8960
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 12
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 2
llama_model_loader: - kv 8: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 9: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 12: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 17: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 19: tokenizer.chat_template str = {% for message in messages %}{% if lo...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_0: 196 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-08-24T20:19:08.250+02:00 level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 293
llm_load_vocab: token to piece cache size = 0.9338 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 151936
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 1536
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 12
llm_load_print_meta: n_head_kv = 2
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 6
llm_load_print_meta: n_embd_k_gqa = 256
llm_load_print_meta: n_embd_v_gqa = 256
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 8960
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 1.54 B
llm_load_print_meta: model size = 885.97 MiB (4.81 BPW)
llm_load_print_meta: general.name = Qwen2-1.5B-Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.30 MiB
ggml_backend_metal_log_allocated_size: allocated buffer, size = 885.97 MiB, ( 886.03 / 10922.67)
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors: CPU buffer size = 182.57 MiB
llm_load_tensors: Metal buffer size = 885.97 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Pro
ggml_metal_init: picking default device: Apple M1 Pro
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name: Apple M1 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB
llama_kv_cache_init: Metal KV buffer size = 224.00 MiB
llama_new_context_with_model: KV self size = 224.00 MiB, K (f16): 112.00 MiB, V (f16): 112.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.34 MiB
llama_new_context_with_model: Metal compute buffer size = 299.75 MiB
llama_new_context_with_model: CPU compute buffer size = 19.01 MiB
llama_new_context_with_model: graph nodes = 986
llama_new_context_with_model: graph splits = 2
time=2024-08-24T20:19:08.501+02:00 level=DEBUG source=server.go:638 msg="model load progress 1.00"
DEBUG [initialize] initializing slots | n_slots=4 tid="0x1e9306940" timestamp=1724523548
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=0 tid="0x1e9306940" timestamp=1724523548
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=1 tid="0x1e9306940" timestamp=1724523548
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=2 tid="0x1e9306940" timestamp=1724523548
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=3 tid="0x1e9306940" timestamp=1724523548
INFO [main] model loaded | tid="0x1e9306940" timestamp=1724523548
DEBUG [update_slots] all slots are idle and system prompt is empty, clear the KV cache | tid="0x1e9306940" timestamp=1724523548
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=0 tid="0x1e9306940" timestamp=1724523548
time=2024-08-24T20:19:08.755+02:00 level=INFO source=server.go:632 msg="llama runner started in 1.26 seconds"
time=2024-08-24T20:19:08.755+02:00 level=DEBUG source=sched.go:458 msg="finished setting up runner" model=/path-to-ollama/.ollama/models/blobs/sha256-405b56374e02b21122ae1469db646be0617c02928fd78e246723ebbb98dbca3e
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=1 tid="0x1e9306940" timestamp=1724523548
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=54639 status=200 tid="0x16ba43000" timestamp=1724523548
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=2 tid="0x1e9306940" timestamp=1724523548
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=54640 status=200 tid="0x16bacf000" timestamp=1724523548
time=2024-08-24T20:19:08.777+02:00 level=DEBUG source=routes.go:1363 msg="chat request" images=0 prompt="<|im_start|>user\n\"create sublime text plugin that takes selected text and convert it by applying base64encoding and replacing selected text with the conversion\"<|im_end|>\n<|im_start|>assistant\nCreating a Sublime Text plugin to perform Base64 encoding on selected text and replace it with converted data is a complex task as you are asking for more than one operation. However, I can provide an outline of how such a feature might be implemented in Sublime Text.\n\nHere's the step-by-step guide to creating such a plugin:\n\n1. **Define Keybinds:** First, you need to define the key bindings that trigger the conversion when the user selects text and presses a specific key.\n\n2. **Create the Plugin:** Create a new Sublime Text plugin file (like `sublime_text_plugin.py`). This file should include the necessary functions for handling command execution, event listeners, etc.\n\n3. **Implement Conversion Function:** In this function, you need to convert the selected text using Base64 encoding. You can use libraries like `base64` in Python to do this.\n\n4. **Insert or Replace Selected Text:** Once the conversion is complete, you need to either insert the converted text into the user's selection or replace it if the user previously typed something there.\n5. **Check for Keybinds to Continue:** If the user presses a key to continue, check whether `execute_command` has been called and if not, call it with the correct parameters.\n\n6. **Event Listening:** Add event listeners in Sublime Text itself so that when changes are made to the selected text (e.g., typ
ed characters), they can trigger the conversion.\n\n7. **Error Handling:** Include error handling for situations where the Base64 encoding process fails or if something else goes wrong during the execution of the command.\n8. **Testing:** Ensure your plugin works as expected by testing it with different scenarios and edge cases, such as when there's no text selected in Sublime Text.\n\nPlease note that this is a high-level overview of creating a Sublime Text plugin. The specifics will depend on the programming language you're using for the plugin (in this case, Python), and how you choose to implement the features described above. For full documentation, follow your chosen platform's official documentation or look up examples online.\n\nRemember that creating plugins like these can be a significant commitment, especially if they are complex and need thorough testing before release.<|im_end|>\n<|im_start|>user\n\"create sublime text plugin that takes selected text and convert it by applying base64encoding and replacing selected text with the conversion\"<|im_end|>\n<|im_start|>assistant\n"
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=3 tid="0x1e9306940" timestamp=1724523548
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=4 tid="0x1e9306940" timestamp=1724523548
DEBUG [update_slots] slot progression | ga_i=0 n_past=0 n_past_se=0 n_prompt_tokens_processed=523 slot_id=0 task_id=4 tid="0x1e9306940" timestamp=1724523548
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=4 tid="0x1e9306940" timestamp=1724523548
DEBUG [print_timings] prompt eval time = 473.73 ms / 523 tokens ( 0.91 ms per token, 1104.00 tokens per second) | n_prompt_tokens_processed=523 n_tokens_second=1103.9997298050373 slot_id=0 t_prompt_processing=473.732 t_token=0.9057973231357553 task_id=4 tid="0x1e9306940" timestamp=1724523553
DEBUG [print_timings] generation eval time = 4179.91 ms / 322 runs ( 12.98 ms per token, 77.04 tokens per second) | n_decoded=322 n_tokens_second=77.0351330446988 slot_id=0 t_token=12.981090062111802 t_token_generation=4179.911 task_id=4 tid="0x1e9306940" timestamp=1724523553
DEBUG [print_timings] total time = 4653.64 ms | slot_id=0 t_prompt_processing=473.732 t_token_generation=4179.911 t_total=4653.643 task_id=4 tid="0x1e9306940" timestamp=1724523553
DEBUG [update_slots] slot released | n_cache_tokens=845 n_ctx=8192 n_past=844 n_system_tokens=0 slot_id=0 task_id=4 tid="0x1e9306940" timestamp=1724523553 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/completion" remote_addr="127.0.0.1" remote_port=54640 status=200 tid="0x16bacf000" timestamp=1724523553
[GIN] 2024/08/24 - 20:19:13 | 200 | 5.980737958s | 127.0.0.1 | POST "/api/chat"
time=2024-08-24T20:19:13.432+02:00 level=DEBUG source=sched.go:462 msg="context for request finished"
time=2024-08-24T20:19:13.432+02:00 level=DEBUG source=sched.go:334 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/path-to-ollama/.ollama/models/blobs/sha256-405b56374e02b21122ae1469db646be0617c02928fd78e246723ebbb98dbca3e duration=5m0s
time=2024-08-24T20:19:13.432+02:00 level=DEBUG source=sched.go:352 msg="after processing request finished event" modelPath=/path-to-ollama/.ollama/models/blobs/sha256-405b56374e02b21122ae1469db646be0617c02928fd78e246723ebbb98dbca3e refCount=0
time=2024-08-24T20:21:08.524+02:00 level=DEBUG source=sched.go:571 msg="evaluating already loaded" model=/path-to-ollama/.ollama/models/blobs/sha256-405b56374e02b21122ae1469db646be0617c02928fd78e246723ebbb98dbca3e
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=329 tid="0x1e9306940" timestamp=1724523668
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=330 tid="0x1e9306940" timestamp=1724523668
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=54664 status=200 tid="0x16bb5b000" timestamp=1724523668
time=2024-08-24T20:21:08.527+02:00 level=DEBUG source=routes.go:1363 msg="chat request" images=0 prompt="<|im_start|>user\n\"create sublime text plugin that takes selected text and convert it by applying base64encoding and replacing selected text with the conversion\"<|im_end|>\n<|im_start|>assistant\n"
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=331 tid="0x1e9306940" timestamp=1724523668
DEBUG [prefix_slot] slot with common prefix found | 0=["slot_id",0,"characters",193]
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=332 tid="0x1e9306940" timestamp=1724523668
DEBUG [update_slots] slot progression | ga_i=0 n_past=34 n_past_se=0 n_prompt_tokens_processed=34 slot_id=0 task_id=332 tid="0x1e9306940" timestamp=1724523668
DEBUG [update_slots] we have to evaluate at least 1 token to generate logits | slot_id=0 task_id=332 tid="0x1e9306940" timestamp=1724523668
DEBUG [update_slots] kv cache rm [p0, end) | p0=33 slot_id=0 task_id=332 tid="0x1e9306940" timestamp=1724523668
DEBUG [print_timings] prompt eval time = 166.62 ms / 34 tokens ( 4.90 ms per token, 204.05 tokens per second) | n_prompt_tokens_processed=34 n_tokens_second=204.0546866560238 slot_id=0 t_prompt_processing=166.622 t_token=4.90064705882353 task_id=332 tid="0x1e9306940" timestamp=1724523677
DEBUG [print_timings] generation eval time = 8888.16 ms / 596 runs ( 14.91 ms per token, 67.06 tokens per second) | n_decoded=596 n_tokens_second=67.0554910065198 slot_id=0 t_token=14.913021812080537 t_token_generation=8888.161 task_id=332 tid="0x1e9306940" timestamp=1724523677
DEBUG [print_timings] total time = 9054.78 ms | slot_id=0 t_prompt_processing=166.622 t_token_generation=8888.161 t_total=9054.783 task_id=332 tid="0x1e9306940" timestamp=1724523677
DEBUG [update_slots] slot released | n_cache_tokens=630 n_ctx=8192 n_past=629 n_system_tokens=0 slot_id=0 task_id=332 tid="0x1e9306940" timestamp=1724523677 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/completion" remote_addr="127.0.0.1" remote_port=54664 status=200 tid="0x16bb5b000" timestamp=1724523677
[GIN] 2024/08/24 - 20:21:17 | 200 | 9.10044125s | 127.0.0.1 | POST "/v1/chat/completions"
time=2024-08-24T20:21:17.584+02:00 level=DEBUG source=sched.go:403 msg="context for request finished"
time=2024-08-24T20:21:17.584+02:00 level=DEBUG source=sched.go:334 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/path-to-ollama/.ollama/models/blobs/sha256-405b56374e02b21122ae1469db646be0617c02928fd78e246723ebbb98dbca3e duration=5m0s
time=2024-08-24T20:21:17.584+02:00 level=DEBUG source=sched.go:352 msg="after processing request finished event" modelPath=/path-to-ollama/.ollama/models/blobs/sha256-405b56374e02b21122ae1469db646be0617c02928fd78e246723ebbb98dbca3e refCount=0
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
ollama version is 0.3.6
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6492/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5086
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5086/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5086/comments
|
https://api.github.com/repos/ollama/ollama/issues/5086/events
|
https://github.com/ollama/ollama/issues/5086
| 2,355,946,611
|
I_kwDOJ0Z1Ps6MbORz
| 5,086
|
`TextMonkey` model
|
{
"login": "insinfo",
"id": 12227024,
"node_id": "MDQ6VXNlcjEyMjI3MDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/12227024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/insinfo",
"html_url": "https://github.com/insinfo",
"followers_url": "https://api.github.com/users/insinfo/followers",
"following_url": "https://api.github.com/users/insinfo/following{/other_user}",
"gists_url": "https://api.github.com/users/insinfo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/insinfo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/insinfo/subscriptions",
"organizations_url": "https://api.github.com/users/insinfo/orgs",
"repos_url": "https://api.github.com/users/insinfo/repos",
"events_url": "https://api.github.com/users/insinfo/events{/privacy}",
"received_events_url": "https://api.github.com/users/insinfo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2024-06-16T19:46:28
| 2024-06-18T11:37:34
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
In my quick tests on the demo, it seems to be the best document understanding and OCR model I have ever tested. My current use case is that I have to manually identify the process code of 1,500,000 images (a challenging job), and I am wondering if this model could do it for me.
I have to identify, from an image like the one below, what the process code/year is in each image.

[573.pdf](https://github.com/user-attachments/files/15859661/573.pdf)
https://github.com/Yuliang-Liu/Monkey?tab=readme-ov-file
[TextMonkey](https://arxiv.org/abs/2403.04473)
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5086/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4163
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4163/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4163/comments
|
https://api.github.com/repos/ollama/ollama/issues/4163/events
|
https://github.com/ollama/ollama/issues/4163
| 2,279,325,554
|
I_kwDOJ0Z1Ps6H279y
| 4,163
|
llava broke in new version v0.1.33
|
{
"login": "VideoFX",
"id": 47264978,
"node_id": "MDQ6VXNlcjQ3MjY0OTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/47264978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VideoFX",
"html_url": "https://github.com/VideoFX",
"followers_url": "https://api.github.com/users/VideoFX/followers",
"following_url": "https://api.github.com/users/VideoFX/following{/other_user}",
"gists_url": "https://api.github.com/users/VideoFX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VideoFX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VideoFX/subscriptions",
"organizations_url": "https://api.github.com/users/VideoFX/orgs",
"repos_url": "https://api.github.com/users/VideoFX/repos",
"events_url": "https://api.github.com/users/VideoFX/events{/privacy}",
"received_events_url": "https://api.github.com/users/VideoFX/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 14
| 2024-05-05T04:50:49
| 2024-05-17T10:01:00
| 2024-05-06T23:17:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama v0.1.33
Intel Core i9 14900K 64GB ram
Nvidia RTX 4070
llava only works for the first inference attempt. All attempts afterwards make up strange descriptions not related to the image, almost like it's looking at a different picture.
This also happens with llava:13b: it works the first time after loading, but after that it is broken.
This also happens on other Windows machines with different Intel and Nvidia combinations.
I have updated Ollama and re-downloaded the llava models.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4163/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2230
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2230/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2230/comments
|
https://api.github.com/repos/ollama/ollama/issues/2230/events
|
https://github.com/ollama/ollama/issues/2230
| 2,103,508,787
|
I_kwDOJ0Z1Ps59YP8z
| 2,230
|
Ollama (llama2) running in VM Box on Ubuntu but /api/generate not working
|
{
"login": "Marvin-VW",
"id": 82050751,
"node_id": "MDQ6VXNlcjgyMDUwNzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/82050751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Marvin-VW",
"html_url": "https://github.com/Marvin-VW",
"followers_url": "https://api.github.com/users/Marvin-VW/followers",
"following_url": "https://api.github.com/users/Marvin-VW/following{/other_user}",
"gists_url": "https://api.github.com/users/Marvin-VW/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Marvin-VW/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Marvin-VW/subscriptions",
"organizations_url": "https://api.github.com/users/Marvin-VW/orgs",
"repos_url": "https://api.github.com/users/Marvin-VW/repos",
"events_url": "https://api.github.com/users/Marvin-VW/events{/privacy}",
"received_events_url": "https://api.github.com/users/Marvin-VW/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-27T11:09:00
| 2024-06-25T06:49:15
| 2024-01-27T13:08:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey, I pulled llama2 as described and I'm running it with `ollama run llama2`.
It is working inside the terminal with no errors, but as soon as I try to reach it via
`curl http://localhost:11434/api/generate -d '{
"model": "llama2",
"prompt":"Why is the sky blue?"
}'`
it just says:
`{"error":"model "llama2" not found, try pulling it first"}`
|
{
"login": "Marvin-VW",
"id": 82050751,
"node_id": "MDQ6VXNlcjgyMDUwNzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/82050751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Marvin-VW",
"html_url": "https://github.com/Marvin-VW",
"followers_url": "https://api.github.com/users/Marvin-VW/followers",
"following_url": "https://api.github.com/users/Marvin-VW/following{/other_user}",
"gists_url": "https://api.github.com/users/Marvin-VW/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Marvin-VW/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Marvin-VW/subscriptions",
"organizations_url": "https://api.github.com/users/Marvin-VW/orgs",
"repos_url": "https://api.github.com/users/Marvin-VW/repos",
"events_url": "https://api.github.com/users/Marvin-VW/events{/privacy}",
"received_events_url": "https://api.github.com/users/Marvin-VW/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2230/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7628
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7628/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7628/comments
|
https://api.github.com/repos/ollama/ollama/issues/7628/events
|
https://github.com/ollama/ollama/pull/7628
| 2,651,706,315
|
PR_kwDOJ0Z1Ps6BnJP2
| 7,628
|
test PR
|
{
"login": "kavita-rane2",
"id": 175689274,
"node_id": "U_kgDOCnjOOg",
"avatar_url": "https://avatars.githubusercontent.com/u/175689274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kavita-rane2",
"html_url": "https://github.com/kavita-rane2",
"followers_url": "https://api.github.com/users/kavita-rane2/followers",
"following_url": "https://api.github.com/users/kavita-rane2/following{/other_user}",
"gists_url": "https://api.github.com/users/kavita-rane2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kavita-rane2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kavita-rane2/subscriptions",
"organizations_url": "https://api.github.com/users/kavita-rane2/orgs",
"repos_url": "https://api.github.com/users/kavita-rane2/repos",
"events_url": "https://api.github.com/users/kavita-rane2/events{/privacy}",
"received_events_url": "https://api.github.com/users/kavita-rane2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-12T10:15:01
| 2024-11-12T17:49:21
| 2024-11-12T17:49:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7628",
"html_url": "https://github.com/ollama/ollama/pull/7628",
"diff_url": "https://github.com/ollama/ollama/pull/7628.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7628.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7628/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/446
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/446/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/446/comments
|
https://api.github.com/repos/ollama/ollama/issues/446/events
|
https://github.com/ollama/ollama/pull/446
| 1,875,075,893
|
PR_kwDOJ0Z1Ps5ZNuiV
| 446
|
Add a warning for if digests are missing
|
{
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/followers",
"following_url": "https://api.github.com/users/xyproto/following{/other_user}",
"gists_url": "https://api.github.com/users/xyproto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyproto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyproto/subscriptions",
"organizations_url": "https://api.github.com/users/xyproto/orgs",
"repos_url": "https://api.github.com/users/xyproto/repos",
"events_url": "https://api.github.com/users/xyproto/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyproto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-08-31T08:50:24
| 2023-08-31T12:16:05
| 2023-08-31T12:16:04
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/446",
"html_url": "https://github.com/ollama/ollama/pull/446",
"diff_url": "https://github.com/ollama/ollama/pull/446.diff",
"patch_url": "https://github.com/ollama/ollama/pull/446.patch",
"merged_at": null
}
| null |
{
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/followers",
"following_url": "https://api.github.com/users/xyproto/following{/other_user}",
"gists_url": "https://api.github.com/users/xyproto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyproto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyproto/subscriptions",
"organizations_url": "https://api.github.com/users/xyproto/orgs",
"repos_url": "https://api.github.com/users/xyproto/repos",
"events_url": "https://api.github.com/users/xyproto/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyproto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/446/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6636
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6636/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6636/comments
|
https://api.github.com/repos/ollama/ollama/issues/6636/events
|
https://github.com/ollama/ollama/issues/6636
| 2,505,843,666
|
I_kwDOJ0Z1Ps6VXCPS
| 6,636
|
Install script not reporting issue with systemd
|
{
"login": "cfjedimaster",
"id": 393660,
"node_id": "MDQ6VXNlcjM5MzY2MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/393660?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cfjedimaster",
"html_url": "https://github.com/cfjedimaster",
"followers_url": "https://api.github.com/users/cfjedimaster/followers",
"following_url": "https://api.github.com/users/cfjedimaster/following{/other_user}",
"gists_url": "https://api.github.com/users/cfjedimaster/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cfjedimaster/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cfjedimaster/subscriptions",
"organizations_url": "https://api.github.com/users/cfjedimaster/orgs",
"repos_url": "https://api.github.com/users/cfjedimaster/repos",
"events_url": "https://api.github.com/users/cfjedimaster/events{/privacy}",
"received_events_url": "https://api.github.com/users/cfjedimaster/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-04T16:39:35
| 2024-11-18T23:02:42
| 2024-11-18T23:02:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This is based on issue #6204. I ran the installer in Ubuntu running under WSL (Windows Subsystem for Linux). The install completed with no error messages; however, systemd wasn't enabled, so Ollama was not set up as a service. This report is to note that the installer never raised an error or warning about the issue.
### OS
WSL2
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_
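A minimal sketch of the check being asked for, assuming the usual convention that systemd creates `/run/systemd/system` when it is the active service manager (this is my illustration, not the actual installer code):

```python
import os

# Warn instead of silently skipping: systemd exposes /run/systemd/system
# when it is running as the service manager (absent under default WSL2).
if os.path.isdir("/run/systemd/system"):
    status = "systemd detected; service setup can proceed"
else:
    status = "WARNING: systemd not running; Ollama will not be installed as a service"
print(status)
```

Printing the warning branch would have surfaced the problem at install time instead of leaving the service silently unconfigured.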
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6636/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6636/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8394
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8394/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8394/comments
|
https://api.github.com/repos/ollama/ollama/issues/8394/events
|
https://github.com/ollama/ollama/issues/8394
| 2,782,329,407
|
I_kwDOJ0Z1Ps6l1vo_
| 8,394
|
The same model could load all onto the GPU last year. Today, after upgrading to ollama, I found that it cannot be loaded onto the GPU all at once.
|
{
"login": "21307369",
"id": 47931342,
"node_id": "MDQ6VXNlcjQ3OTMxMzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47931342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/21307369",
"html_url": "https://github.com/21307369",
"followers_url": "https://api.github.com/users/21307369/followers",
"following_url": "https://api.github.com/users/21307369/following{/other_user}",
"gists_url": "https://api.github.com/users/21307369/gists{/gist_id}",
"starred_url": "https://api.github.com/users/21307369/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/21307369/subscriptions",
"organizations_url": "https://api.github.com/users/21307369/orgs",
"repos_url": "https://api.github.com/users/21307369/repos",
"events_url": "https://api.github.com/users/21307369/events{/privacy}",
"received_events_url": "https://api.github.com/users/21307369/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2025-01-12T07:26:24
| 2025-01-14T21:27:11
| 2025-01-14T21:26:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The situation is as follows: on version 0.2.1, Ollama could fully load codegeex4:latest onto the GPU. After upgrading to 0.5.4 today, it no longer can. My GPU is a 6750 GRE with 12GB of VRAM and the model file is 5.5GB, so something seems wrong: resource usage shows only about 3GB of VRAM in use, with the rest in system memory.
With a smaller model, hhao/qwen2.5-coder-tools:3b (1.19GB), everything does load into GPU memory and the speed is normal.
hhao/qwen2.5-coder-tools:3b


codegeex4:

### OS
Windows
### GPU
AMD
### CPU
Intel
### Ollama version
0.5.4-0-g08b8916
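For context, a back-of-envelope sketch of why total VRAM demand can exceed the raw model file size (every number here is an assumption for illustration, not a measurement from this report):

```python
# Illustrative only: weights + KV cache + compute buffers must all fit
# before the scheduler offloads every layer to the GPU.
weights_gb = 5.5     # reported codegeex4 file size
kv_cache_gb = 1.0    # assumed; grows with context length
overhead_gb = 0.8    # assumed; graph/compute buffers
total_gb = weights_gb + kv_cache_gb + overhead_gb
print(f"estimated footprint: {total_gb:.1f} GB")
```

If the estimated footprint plus a safety margin exceeds the VRAM the runtime believes is free, a newer version may fall back to partial offload even though an older version loaded everything.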
|
{
"login": "21307369",
"id": 47931342,
"node_id": "MDQ6VXNlcjQ3OTMxMzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47931342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/21307369",
"html_url": "https://github.com/21307369",
"followers_url": "https://api.github.com/users/21307369/followers",
"following_url": "https://api.github.com/users/21307369/following{/other_user}",
"gists_url": "https://api.github.com/users/21307369/gists{/gist_id}",
"starred_url": "https://api.github.com/users/21307369/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/21307369/subscriptions",
"organizations_url": "https://api.github.com/users/21307369/orgs",
"repos_url": "https://api.github.com/users/21307369/repos",
"events_url": "https://api.github.com/users/21307369/events{/privacy}",
"received_events_url": "https://api.github.com/users/21307369/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8394/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/482
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/482/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/482/comments
|
https://api.github.com/repos/ollama/ollama/issues/482/events
|
https://github.com/ollama/ollama/pull/482
| 1,885,279,069
|
PR_kwDOJ0Z1Ps5Zv_OU
| 482
|
[docs] Improve build instructions
|
{
"login": "apepper",
"id": 86275,
"node_id": "MDQ6VXNlcjg2Mjc1",
"avatar_url": "https://avatars.githubusercontent.com/u/86275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apepper",
"html_url": "https://github.com/apepper",
"followers_url": "https://api.github.com/users/apepper/followers",
"following_url": "https://api.github.com/users/apepper/following{/other_user}",
"gists_url": "https://api.github.com/users/apepper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apepper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apepper/subscriptions",
"organizations_url": "https://api.github.com/users/apepper/orgs",
"repos_url": "https://api.github.com/users/apepper/repos",
"events_url": "https://api.github.com/users/apepper/events{/privacy}",
"received_events_url": "https://api.github.com/users/apepper/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-09-07T07:27:13
| 2023-09-07T10:59:45
| 2023-09-07T10:43:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/482",
"html_url": "https://github.com/ollama/ollama/pull/482",
"diff_url": "https://github.com/ollama/ollama/pull/482.diff",
"patch_url": "https://github.com/ollama/ollama/pull/482.patch",
"merged_at": "2023-09-07T10:43:26"
}
|
Go is required and not installed by default.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/482/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/482/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6281
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6281/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6281/comments
|
https://api.github.com/repos/ollama/ollama/issues/6281/events
|
https://github.com/ollama/ollama/pull/6281
| 2,457,517,648
|
PR_kwDOJ0Z1Ps537WSt
| 6,281
|
docs(tools): add ingest
|
{
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/followers",
"following_url": "https://api.github.com/users/sammcj/following{/other_user}",
"gists_url": "https://api.github.com/users/sammcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammcj/subscriptions",
"organizations_url": "https://api.github.com/users/sammcj/orgs",
"repos_url": "https://api.github.com/users/sammcj/repos",
"events_url": "https://api.github.com/users/sammcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammcj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-08-09T09:34:08
| 2024-08-14T22:24:23
| 2024-08-14T22:24:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6281",
"html_url": "https://github.com/ollama/ollama/pull/6281",
"diff_url": "https://github.com/ollama/ollama/pull/6281.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6281.patch",
"merged_at": null
}
|
Add ingest to the list of ollama integrated tools https://github.com/sammcj/ingest


|
{
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/followers",
"following_url": "https://api.github.com/users/sammcj/following{/other_user}",
"gists_url": "https://api.github.com/users/sammcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammcj/subscriptions",
"organizations_url": "https://api.github.com/users/sammcj/orgs",
"repos_url": "https://api.github.com/users/sammcj/repos",
"events_url": "https://api.github.com/users/sammcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammcj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6281/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6281/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5368
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5368/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5368/comments
|
https://api.github.com/repos/ollama/ollama/issues/5368/events
|
https://github.com/ollama/ollama/pull/5368
| 2,381,354,023
|
PR_kwDOJ0Z1Ps5z8QF9
| 5,368
|
Do not shift context for sliding window models
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-29T00:49:26
| 2024-06-29T02:39:33
| 2024-06-29T02:39:31
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5368",
"html_url": "https://github.com/ollama/ollama/pull/5368",
"diff_url": "https://github.com/ollama/ollama/pull/5368.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5368.patch",
"merged_at": "2024-06-29T02:39:31"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5368/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6472
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6472/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6472/comments
|
https://api.github.com/repos/ollama/ollama/issues/6472/events
|
https://github.com/ollama/ollama/issues/6472
| 2,482,632,801
|
I_kwDOJ0Z1Ps6T-fhh
| 6,472
|
404 one download
|
{
"login": "vorticalbox",
"id": 10886065,
"node_id": "MDQ6VXNlcjEwODg2MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/10886065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vorticalbox",
"html_url": "https://github.com/vorticalbox",
"followers_url": "https://api.github.com/users/vorticalbox/followers",
"following_url": "https://api.github.com/users/vorticalbox/following{/other_user}",
"gists_url": "https://api.github.com/users/vorticalbox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vorticalbox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vorticalbox/subscriptions",
"organizations_url": "https://api.github.com/users/vorticalbox/orgs",
"repos_url": "https://api.github.com/users/vorticalbox/repos",
"events_url": "https://api.github.com/users/vorticalbox/events{/privacy}",
"received_events_url": "https://api.github.com/users/vorticalbox/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-08-23T08:40:17
| 2024-12-02T21:54:11
| 2024-12-02T21:54:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
https://ollama.com/download/ollama-linux-amd64.tgz is returning a 404
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6472/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4267
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4267/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4267/comments
|
https://api.github.com/repos/ollama/ollama/issues/4267/events
|
https://github.com/ollama/ollama/issues/4267
| 2,286,575,046
|
I_kwDOJ0Z1Ps6ISl3G
| 4,267
|
ollama_llama_server is still running after exiting via SIGINT
|
{
"login": "RobbyCBennett",
"id": 22121365,
"node_id": "MDQ6VXNlcjIyMTIxMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/22121365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RobbyCBennett",
"html_url": "https://github.com/RobbyCBennett",
"followers_url": "https://api.github.com/users/RobbyCBennett/followers",
"following_url": "https://api.github.com/users/RobbyCBennett/following{/other_user}",
"gists_url": "https://api.github.com/users/RobbyCBennett/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RobbyCBennett/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RobbyCBennett/subscriptions",
"organizations_url": "https://api.github.com/users/RobbyCBennett/orgs",
"repos_url": "https://api.github.com/users/RobbyCBennett/repos",
"events_url": "https://api.github.com/users/RobbyCBennett/events{/privacy}",
"received_events_url": "https://api.github.com/users/RobbyCBennett/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-05-08T22:18:30
| 2024-05-09T22:58:46
| 2024-05-09T22:58:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I kill `ollama serve` with Ctrl-C at the keyboard, it closes `ollama_llama_server`, so everything exits properly. However, when I kill it another way, such as `kill -2`, `ollama_llama_server` keeps running afterward.
Replicate:
1. `OLLAMA_HOST=localhost:6767 ollama serve &`
2. Note the process ID
3. `ollama pull llama3` if you don't already have this model
4. Post a chat
- Run the curl command below OR
- Run the python script included below
5. `kill -2 PROCESS_ID_HERE` (2 is SIGINT, just like control-C)
```sh
# curl command to post a chat
curl http://localhost:6767/api/chat -d '{
"model": "llama3",
"messages": [
{
"role": "user",
"content": "why is the sky blue?"
}
]
}'
```
```py
#! /usr/bin/env python3
# python script to post a chat
from llama_index.core import Document, Settings, VectorStoreIndex
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

MODEL = 'llama3'
TEMPERATURE = 0.2
PORT = 6767
PROMPT = 'What is your favorite color out of the colors listed?'

def createLLM():
    llm = Ollama(
        base_url=f'http://localhost:{PORT}',
        model=MODEL,
        temperature=TEMPERATURE,
        request_timeout=60.0,  # seconds
    )
    Settings.llm = llm
    Settings.embed_model = OllamaEmbedding(model_name=MODEL)

def main():
    createLLM()
    index = VectorStoreIndex.from_documents([Document(id_='colors', text='red, yellow, blue')])
    query_engine = index.as_query_engine(streaming=True)
    print(PROMPT)
    response = str(query_engine.query(PROMPT))
    print(response)

if __name__ == '__main__':
    main()
```
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.34
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4267/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1459
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1459/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1459/comments
|
https://api.github.com/repos/ollama/ollama/issues/1459/events
|
https://github.com/ollama/ollama/issues/1459
| 2,034,781,666
|
I_kwDOJ0Z1Ps55SE3i
| 1,459
|
LiteLLM does not forward temperature to Ollama models
|
{
"login": "scpedicini",
"id": 2040540,
"node_id": "MDQ6VXNlcjIwNDA1NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2040540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scpedicini",
"html_url": "https://github.com/scpedicini",
"followers_url": "https://api.github.com/users/scpedicini/followers",
"following_url": "https://api.github.com/users/scpedicini/following{/other_user}",
"gists_url": "https://api.github.com/users/scpedicini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scpedicini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scpedicini/subscriptions",
"organizations_url": "https://api.github.com/users/scpedicini/orgs",
"repos_url": "https://api.github.com/users/scpedicini/repos",
"events_url": "https://api.github.com/users/scpedicini/events{/privacy}",
"received_events_url": "https://api.github.com/users/scpedicini/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-12-11T04:04:19
| 2023-12-11T04:06:01
| 2023-12-11T04:06:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
There seems to be an issue with the temperature setting not being properly passed through LiteLLM to Ollama.
When running against the Ollama API directly:
```bash
curl http://localhost:11434/api/chat -d '{
"model": "mistral",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Write a single paragraph about DNA."
}
],
"options": {
"temperature": 0.0
},
"stream": false
}'
```
Response
```json
{"model":"mistral","created_at":"2023-12-11T03:45:37.121290844Z","message":{"role":"assistant","content":"DNA, short for deoxyribonucleic acid, is a complex, long-chain molecule that carries the genetic code and instructions used in the growth, development, functioning, and reproduction of all living organisms. It is composed of four chemical building blocks or nucleotides, which are adenine (A), guanine (G), cytosine (C), and thymine (T). The sequence of these nucleotides within DNA determines the genetic code, which is unique to each individual and passed down from parents during reproduction. DNA is organized into 23 pairs of chromosomes, which contain all the genetic information necessary for the development and survival of an organism."},"done":true,"total_duration":16615312883,"prompt_eval_count":22,"prompt_eval_duration":2769643000,"eval_count":148,"eval_duration":13834034000}%
```
I ran this several times and it produced the exact same output, which is what I would expect given a temperature of 0.0.
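As a side note on why temperature 0.0 should yield identical outputs every run: sampling probabilities are a softmax over temperature-scaled logits, and as the temperature approaches 0 the distribution collapses onto the highest-logit token. A minimal sketch of that effect (not Ollama's actual sampler, just an illustration):

```python
import math

def sample_probs(logits, temperature):
    # Softmax over temperature-scaled logits; near T=0 almost all
    # probability mass concentrates on the largest logit, so greedy
    # (deterministic) decoding falls out naturally.
    t = max(temperature, 1e-6)  # avoid division by zero at T=0
    scaled = [l / t for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

probs_hot = sample_probs([2.0, 1.0, 0.5], 1.0)   # spread across tokens
probs_cold = sample_probs([2.0, 1.0, 0.5], 0.0)  # ~all mass on token 0
```

So if LiteLLM were forwarding `temperature: 0.0` correctly, repeated runs should match, as they do when calling Ollama directly.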
However, when the same command is sent to LiteLLM which is connected to Ollama via the config YAML:
```yaml
model_list:
- model_name: gpt-3.5-turbo # user-facing model alias
litellm_params: # all params accepted by litellm.completion() - https://docs.litellm.ai/docs/completion/input
model: ollama/mistral
api_base: http://ollama:11434
litellm_settings:
drop_params: True
set_verbose: True
```
Hitting `chat/completions` LiteLLM:
```bash
curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--data ' {
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Write a single paragraph about DNA."
}
],
"stream": false,
"temperature": 0.0
}'
```
**Response 1**
```json
{"id":"chatcmpl-a877894a-cbf1-42c0-8948-5b4fcc988614","choices":[{"finish_reason":"stop","index":0,"message":{"content":" DNA, short for deoxyribonucleic acid, is a complex molecule that contains the genetic information necessary for the growth, development, and reproduction of all living organisms. It is composed of long strands of nucleotides, which are the building blocks of DNA. These nucleotides are made up of a sugar molecule, a phosphate group, and one of four nitrogenous bases: adenine (A), guanine (G), cytosine (C), and thymine (T). The sequence of these bases within DNA determines the genetic code, which is used to create and control the characteristics and functions of cells, tissues and organs. DNA replication, or the process of copying DNA, is essential for cell division and the transmission of genetic information from one generation to the next.","role":"assistant"}}],"created":1702266508,"model":"ollama/mistral","object":"chat.completion","system_fingerprint":null,"usage":{"prompt_tokens":12,"completion_tokens":159,"total_tokens":171}}%
```
**Response 2**
```json
{"id":"chatcmpl-715a5f08-d1b8-42fe-94dd-493da90763c8","choices":[{"finish_reason":"stop","index":0,"message":{"content":"DNA, or deoxyribonucleic acid, is a complex, double-stranded molecule that carries the genetic code and instructions for the development, functioning and reproduction of all living organisms. It consists of four chemical building blocks, called nucleotides, which are adenine (A), cytosine (C), guanine (G) and thymine (T). The molecule is structured in a twisted, ladder-like formation known as the double helix, with the sugar-phosphate backbone on the outside and the nucleotides paired up on the inside. DNA replication, or the process of copying the genetic code, is essential for cell division and the transmission of traits from one generation to the next.","role":"assistant"}}],"created":1702266527,"model":"ollama/mistral","object":"chat.completion","system_fingerprint":null,"usage":{"prompt_tokens":12,"completion_tokens":143,"total_tokens":155}}%
```
Also something to note - I don't think that LiteLLM is doing the templatized conversion correctly. LiteLLM's verbose log shows this is what it sends to Ollama:
```
2023-12-10 21:48:47 POST Request Sent from LiteLLM:
2023-12-10 21:48:47 curl -X POST \
2023-12-10 21:48:47 http://ollama:11434/api/generate \
2023-12-10 21:48:47 -d '{'model': 'mistral', 'prompt': 'You are a helpful assistant.Write a single paragraph about DNA.', 'temperature': 0.0}'
```
It looks like LiteLLM is just concatenating all the role contents into a single prompt (`assistant`, `system`, `user`).
Also, Ollama takes an `options` dictionary for its parameters (`temperature`, `frequency_penalty`, etc.)
https://github.com/jmorganca/ollama/blob/main/docs/api.md
I think this is the issue. Additionally, Ollama pushed a new update to their repo (and docker image) which adds a new api endpoint for chat messages:
```bash
curl http://localhost:11434/api/chat -d '{
"model": "llama2",
"messages": [
{
"role": "user",
"content": "why is the sky blue?"
}
]
}'
```
I think that LiteLLM might need to switch between `/api/generate` and `/api/chat` depending on the model and data passed, or on whether somebody uses `openai.completions.create` vs `openai.chat.completions.create`.
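The routing suggested above could be sketched roughly as follows (a hypothetical helper, not LiteLLM's code; `build_ollama_request` and its parameters are illustrative): pick `/api/chat` when role-based messages are supplied, `/api/generate` for a bare prompt, and nest sampling parameters under `options` as Ollama's API expects.

```python
def build_ollama_request(base_url, model, prompt=None, messages=None, **options):
    # Hypothetical helper: route to Ollama's chat endpoint when structured
    # messages are given, otherwise to the generate endpoint, and place
    # sampling parameters (temperature, etc.) under "options" rather than
    # at the top level of the payload.
    if messages is not None:
        return f"{base_url}/api/chat", {
            "model": model,
            "messages": messages,
            "options": options,
        }
    return f"{base_url}/api/generate", {
        "model": model,
        "prompt": prompt,
        "options": options,
    }

url, payload = build_ollama_request(
    "http://localhost:11434", "mistral",
    messages=[{"role": "user", "content": "hi"}],
    temperature=0.0,
)
```

This would both preserve the chat roles (instead of concatenating them into one prompt) and deliver `temperature` where Ollama actually reads it.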
|
{
"login": "scpedicini",
"id": 2040540,
"node_id": "MDQ6VXNlcjIwNDA1NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2040540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scpedicini",
"html_url": "https://github.com/scpedicini",
"followers_url": "https://api.github.com/users/scpedicini/followers",
"following_url": "https://api.github.com/users/scpedicini/following{/other_user}",
"gists_url": "https://api.github.com/users/scpedicini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scpedicini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scpedicini/subscriptions",
"organizations_url": "https://api.github.com/users/scpedicini/orgs",
"repos_url": "https://api.github.com/users/scpedicini/repos",
"events_url": "https://api.github.com/users/scpedicini/events{/privacy}",
"received_events_url": "https://api.github.com/users/scpedicini/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1459/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3395
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3395/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3395/comments
|
https://api.github.com/repos/ollama/ollama/issues/3395/events
|
https://github.com/ollama/ollama/issues/3395
| 2,214,181,524
|
I_kwDOJ0Z1Ps6D-bqU
| 3,395
|
Print better error message when a new version of Ollama is required
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-03-28T21:22:13
| 2024-04-19T15:41:38
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Today Ollama prints hard-to-recognize error messages when a model isn't supported because a newer version of Ollama is required.
### What did you expect to see?
An error along the lines of: `Error: a new version of Ollama is required to run this model`. Even better would be to include the version number required.
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
_No response_
### Architecture
_No response_
### Platform
_No response_
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3395/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4149
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4149/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4149/comments
|
https://api.github.com/repos/ollama/ollama/issues/4149/events
|
https://github.com/ollama/ollama/pull/4149
| 2,278,888,122
|
PR_kwDOJ0Z1Ps5uiW65
| 4,149
|
fix: format go code
|
{
"login": "alwqx",
"id": 9915368,
"node_id": "MDQ6VXNlcjk5MTUzNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9915368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alwqx",
"html_url": "https://github.com/alwqx",
"followers_url": "https://api.github.com/users/alwqx/followers",
"following_url": "https://api.github.com/users/alwqx/following{/other_user}",
"gists_url": "https://api.github.com/users/alwqx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alwqx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alwqx/subscriptions",
"organizations_url": "https://api.github.com/users/alwqx/orgs",
"repos_url": "https://api.github.com/users/alwqx/repos",
"events_url": "https://api.github.com/users/alwqx/events{/privacy}",
"received_events_url": "https://api.github.com/users/alwqx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-04T09:36:38
| 2024-05-06T10:57:45
| 2024-05-05T23:08:09
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4149",
"html_url": "https://github.com/ollama/ollama/pull/4149",
"diff_url": "https://github.com/ollama/ollama/pull/4149.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4149.patch",
"merged_at": "2024-05-05T23:08:09"
}
|
Hi, I found some Go code that was not formatted, so I ran `gofmt -w .` to format it.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4149/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8584
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8584/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8584/comments
|
https://api.github.com/repos/ollama/ollama/issues/8584/events
|
https://github.com/ollama/ollama/issues/8584
| 2,811,112,982
|
I_kwDOJ0Z1Ps6nji4W
| 8,584
|
Error: "not authorized to push"
|
{
"login": "NLP-man",
"id": 174748562,
"node_id": "U_kgDOCmpzkg",
"avatar_url": "https://avatars.githubusercontent.com/u/174748562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NLP-man",
"html_url": "https://github.com/NLP-man",
"followers_url": "https://api.github.com/users/NLP-man/followers",
"following_url": "https://api.github.com/users/NLP-man/following{/other_user}",
"gists_url": "https://api.github.com/users/NLP-man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NLP-man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NLP-man/subscriptions",
"organizations_url": "https://api.github.com/users/NLP-man/orgs",
"repos_url": "https://api.github.com/users/NLP-man/repos",
"events_url": "https://api.github.com/users/NLP-man/events{/privacy}",
"received_events_url": "https://api.github.com/users/NLP-man/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-25T18:18:43
| 2025-01-25T20:47:45
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
First I pulled a model using Ollama (`ollama pull qwen:0.5b`), then I wanted to push this model using ```ollama push qwen:0.5b```, but I got this error:
pushing fad2a06e4cc7... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 394 MB
pushing 41c2cf8c272f... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 7.3 KB
pushing 1da0581fd4ce... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 130 B
pushing f02dd72bb242... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 59 B
pushing ea0a531a015b... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 485 B
pushing manifest
Error: you are not authorized to push to this namespace, create the model under a namespace you own
Has anyone else encountered this issue?
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8584/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2648
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2648/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2648/comments
|
https://api.github.com/repos/ollama/ollama/issues/2648/events
|
https://github.com/ollama/ollama/issues/2648
| 2,147,548,858
|
I_kwDOJ0Z1Ps6AAP66
| 2,648
|
Windows Defender alert on update to 0.1.26
|
{
"login": "OMGnotThatGuy",
"id": 91296990,
"node_id": "MDQ6VXNlcjkxMjk2OTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/91296990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OMGnotThatGuy",
"html_url": "https://github.com/OMGnotThatGuy",
"followers_url": "https://api.github.com/users/OMGnotThatGuy/followers",
"following_url": "https://api.github.com/users/OMGnotThatGuy/following{/other_user}",
"gists_url": "https://api.github.com/users/OMGnotThatGuy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OMGnotThatGuy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OMGnotThatGuy/subscriptions",
"organizations_url": "https://api.github.com/users/OMGnotThatGuy/orgs",
"repos_url": "https://api.github.com/users/OMGnotThatGuy/repos",
"events_url": "https://api.github.com/users/OMGnotThatGuy/events{/privacy}",
"received_events_url": "https://api.github.com/users/OMGnotThatGuy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2024-02-21T19:43:54
| 2024-02-23T22:23:35
| 2024-02-21T20:32:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I didn't have any issues installing the previous packages, but it seems the latest release triggered a malware alert in Defender on Windows 11.
**Windows:**
OS Name Microsoft Windows 11 Pro
Version 10.0.22631 Build 22631
**Defender:** - It appears Defender updated its signatures afterwards, so I don't know which version was active when the alert popped up.
Security intelligence version: 1.405.380.0
Version created on: 2/21/2024 5:51 AM
Last update: 2/21/2024 2:00 PM

I checked the signatures and they have the same signing cert as the previous version. I uploaded the installer and app executables to VirusTotal and got one flag in addition to my Defender alert, plus some weird sandbox behavior:
[OllamaSetup.exe](https://www.virustotal.com/gui/file/cacb2123e27ce31c065b723061ef6784308d77840ac0d554dd7696beb23fc542/detection) - **Blocked by Windows Defender**
[ollama app.exe](https://www.virustotal.com/gui/file/5b3ca41783194ad89998ac7dae4a192d72cdffa2f4af93d6aa7b930509154cc8/detection) - **Blocked by Windows Defender**
[VirusTotal behavioral analysis](https://www.virustotal.com/gui/file/5b3ca41783194ad89998ac7dae4a192d72cdffa2f4af93d6aa7b930509154cc8/behavior) claimed "ollama app.exe" dropped a copy of GoogleUpdater on their sandbox. I did not see this on my system, but I also don't have any Google software installed. ¯\\\_(ツ)_/¯
[ollama.exe](https://www.virustotal.com/gui/file/5110bd46530744ee84817f2200d0b502076187c9183ff238ed3fddf5a09bf580/detection) - **One additional detection on VirusTotal**
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2648/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2648/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4168
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4168/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4168/comments
|
https://api.github.com/repos/ollama/ollama/issues/4168/events
|
https://github.com/ollama/ollama/issues/4168
| 2,279,511,789
|
I_kwDOJ0Z1Ps6H3pbt
| 4,168
|
Support for whisper models in Ollama
|
{
"login": "gkiri",
"id": 25444878,
"node_id": "MDQ6VXNlcjI1NDQ0ODc4",
"avatar_url": "https://avatars.githubusercontent.com/u/25444878?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gkiri",
"html_url": "https://github.com/gkiri",
"followers_url": "https://api.github.com/users/gkiri/followers",
"following_url": "https://api.github.com/users/gkiri/following{/other_user}",
"gists_url": "https://api.github.com/users/gkiri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gkiri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gkiri/subscriptions",
"organizations_url": "https://api.github.com/users/gkiri/orgs",
"repos_url": "https://api.github.com/users/gkiri/repos",
"events_url": "https://api.github.com/users/gkiri/events{/privacy}",
"received_events_url": "https://api.github.com/users/gkiri/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-05T12:38:40
| 2024-05-05T19:06:25
| 2024-05-05T19:06:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4168/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4168/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1133
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1133/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1133/comments
|
https://api.github.com/repos/ollama/ollama/issues/1133/events
|
https://github.com/ollama/ollama/pull/1133
| 1,993,792,593
|
PR_kwDOJ0Z1Ps5fd4pi
| 1,133
|
initial commit of the readline editor replacement
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-11-15T00:10:07
| 2024-08-14T20:02:09
| 2024-08-14T20:02:08
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1133",
"html_url": "https://github.com/ollama/ollama/pull/1133",
"diff_url": "https://github.com/ollama/ollama/pull/1133.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1133.patch",
"merged_at": null
}
|
This change is a full replacement for the current `readline` package that we had introduced before. It builds on that version but now properly handles multi-line input.
Some new features:
* word wrap between lines (no more splitting lines in the middle of a word)
* free movement of the cursor (up/down/left/right)
* full multi-line support (no need to use """)
* allow new lines w/ Ctrl-J (still impossible to allow this w/ shift-enter)
* bracketed paste support (copy and paste into the editor)
There are a few things which are still broken:
* Deleting a line doesn't (yet) clean up the buffer (although it will remove the text)
* Moving by word (i.e. forward/backward by word) isn't yet supported
* The delete key is only single line still (although backspace will work across lines)
* There is no "history" support yet for getting old prompts
* I haven't yet added """ support back in, but we potentially don't need it?
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1133/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7819
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7819/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7819/comments
|
https://api.github.com/repos/ollama/ollama/issues/7819/events
|
https://github.com/ollama/ollama/pull/7819
| 2,687,966,890
|
PR_kwDOJ0Z1Ps6C8WZU
| 7,819
|
Bring ollama `fileType`s into alignment with llama.cpp.
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-11-24T16:26:16
| 2024-11-24T18:33:33
| 2024-11-24T18:33:33
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7819",
"html_url": "https://github.com/ollama/ollama/pull/7819",
"diff_url": "https://github.com/ollama/ollama/pull/7819.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7819.patch",
"merged_at": "2024-11-24T18:33:33"
}
|
Fixes #7816
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7819/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7787
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7787/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7787/comments
|
https://api.github.com/repos/ollama/ollama/issues/7787/events
|
https://github.com/ollama/ollama/issues/7787
| 2,681,697,146
|
I_kwDOJ0Z1Ps6f13N6
| 7,787
|
How to update ollama desktop on windows?
|
{
"login": "Septemberlemon",
"id": 84148797,
"node_id": "MDQ6VXNlcjg0MTQ4Nzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/84148797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Septemberlemon",
"html_url": "https://github.com/Septemberlemon",
"followers_url": "https://api.github.com/users/Septemberlemon/followers",
"following_url": "https://api.github.com/users/Septemberlemon/following{/other_user}",
"gists_url": "https://api.github.com/users/Septemberlemon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Septemberlemon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Septemberlemon/subscriptions",
"organizations_url": "https://api.github.com/users/Septemberlemon/orgs",
"repos_url": "https://api.github.com/users/Septemberlemon/repos",
"events_url": "https://api.github.com/users/Septemberlemon/events{/privacy}",
"received_events_url": "https://api.github.com/users/Septemberlemon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-11-22T04:09:36
| 2024-12-06T15:00:06
| 2024-12-06T15:00:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I use Windows and my Ollama version is 0.3.13, and I can't update.

These are the log files. I use Clash for Windows; how can I solve it?
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7787/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7767
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7767/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7767/comments
|
https://api.github.com/repos/ollama/ollama/issues/7767/events
|
https://github.com/ollama/ollama/pull/7767
| 2,676,975,851
|
PR_kwDOJ0Z1Ps6CkuCF
| 7,767
|
KV Cache Fixes
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-20T19:32:48
| 2024-11-20T20:49:26
| 2024-11-20T20:49:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7767",
"html_url": "https://github.com/ollama/ollama/pull/7767",
"diff_url": "https://github.com/ollama/ollama/pull/7767.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7767.patch",
"merged_at": "2024-11-20T20:49:25"
}
|
Users have reported a number of errors related to the KV cache such as:
- Error: "could not find a KV slot for the batch - try reducing the size of the batch or increase the context. code: 1"
- Hanging due to infinite loops
- Output that ends unexpectedly
- Slower performance than before when passing inputs that are much longer than the context size
This aims to both fix these problems and continue to make this area of the code less error-prone.
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7767/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3076
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3076/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3076/comments
|
https://api.github.com/repos/ollama/ollama/issues/3076/events
|
https://github.com/ollama/ollama/pull/3076
| 2,181,471,764
|
PR_kwDOJ0Z1Ps5pXpFU
| 3,076
|
Add Japanese translation of documentation
|
{
"login": "jesseclin",
"id": 34976014,
"node_id": "MDQ6VXNlcjM0OTc2MDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/34976014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jesseclin",
"html_url": "https://github.com/jesseclin",
"followers_url": "https://api.github.com/users/jesseclin/followers",
"following_url": "https://api.github.com/users/jesseclin/following{/other_user}",
"gists_url": "https://api.github.com/users/jesseclin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jesseclin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jesseclin/subscriptions",
"organizations_url": "https://api.github.com/users/jesseclin/orgs",
"repos_url": "https://api.github.com/users/jesseclin/repos",
"events_url": "https://api.github.com/users/jesseclin/events{/privacy}",
"received_events_url": "https://api.github.com/users/jesseclin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-12T12:15:02
| 2024-11-21T08:43:18
| 2024-11-21T08:43:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3076",
"html_url": "https://github.com/ollama/ollama/pull/3076",
"diff_url": "https://github.com/ollama/ollama/pull/3076.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3076.patch",
"merged_at": null
}
|
Follow-up to issue #2371:
- README_ja.md: Translation of README.md
- docs/ja/: Translation of docs/...
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3076/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3076/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1525
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1525/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1525/comments
|
https://api.github.com/repos/ollama/ollama/issues/1525/events
|
https://github.com/ollama/ollama/issues/1525
| 2,042,143,992
|
I_kwDOJ0Z1Ps55uKT4
| 1,525
|
Mixtral 8x7B support
|
{
"login": "Baughn",
"id": 45811,
"node_id": "MDQ6VXNlcjQ1ODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/45811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Baughn",
"html_url": "https://github.com/Baughn",
"followers_url": "https://api.github.com/users/Baughn/followers",
"following_url": "https://api.github.com/users/Baughn/following{/other_user}",
"gists_url": "https://api.github.com/users/Baughn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Baughn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Baughn/subscriptions",
"organizations_url": "https://api.github.com/users/Baughn/orgs",
"repos_url": "https://api.github.com/users/Baughn/repos",
"events_url": "https://api.github.com/users/Baughn/events{/privacy}",
"received_events_url": "https://api.github.com/users/Baughn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-12-14T17:22:20
| 2023-12-14T22:54:47
| 2023-12-14T22:54:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Following up on #1477 - llama.cpp now supports Mixtral.
I'd reopen the previous issue, but well, I can't.
|
{
"login": "Baughn",
"id": 45811,
"node_id": "MDQ6VXNlcjQ1ODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/45811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Baughn",
"html_url": "https://github.com/Baughn",
"followers_url": "https://api.github.com/users/Baughn/followers",
"following_url": "https://api.github.com/users/Baughn/following{/other_user}",
"gists_url": "https://api.github.com/users/Baughn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Baughn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Baughn/subscriptions",
"organizations_url": "https://api.github.com/users/Baughn/orgs",
"repos_url": "https://api.github.com/users/Baughn/repos",
"events_url": "https://api.github.com/users/Baughn/events{/privacy}",
"received_events_url": "https://api.github.com/users/Baughn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1525/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7971
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7971/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7971/comments
|
https://api.github.com/repos/ollama/ollama/issues/7971/events
|
https://github.com/ollama/ollama/pull/7971
| 2,723,489,062
|
PR_kwDOJ0Z1Ps6EWbkY
| 7,971
|
ADD: OLLAMA_LLM_DEFAULT
|
{
"login": "bet0x",
"id": 778862,
"node_id": "MDQ6VXNlcjc3ODg2Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/778862?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bet0x",
"html_url": "https://github.com/bet0x",
"followers_url": "https://api.github.com/users/bet0x/followers",
"following_url": "https://api.github.com/users/bet0x/following{/other_user}",
"gists_url": "https://api.github.com/users/bet0x/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bet0x/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bet0x/subscriptions",
"organizations_url": "https://api.github.com/users/bet0x/orgs",
"repos_url": "https://api.github.com/users/bet0x/repos",
"events_url": "https://api.github.com/users/bet0x/events{/privacy}",
"received_events_url": "https://api.github.com/users/bet0x/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-12-06T16:42:46
| 2024-12-06T16:42:46
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7971",
"html_url": "https://github.com/ollama/ollama/pull/7971",
"diff_url": "https://github.com/ollama/ollama/pull/7971.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7971.patch",
"merged_at": null
}
|
The addition of OLLAMA_LLM_DEFAULT is a significant improvement over API-based model pulls. While Ollama's API does support model pulling, having a default model environment variable streamlines deployment and reduces operational overhead.
This approach aligns with modern DevOps practices by handling model downloads during server startup. It eliminates the need for separate API calls or scripts, ensuring the required model is always available before the service starts handling requests. For teams running Ollama in containers or orchestrated environments, this means simpler configurations and more reliable deployments.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7971/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3541
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3541/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3541/comments
|
https://api.github.com/repos/ollama/ollama/issues/3541/events
|
https://github.com/ollama/ollama/pull/3541
| 2,231,887,136
|
PR_kwDOJ0Z1Ps5sDCX_
| 3,541
|
types/model: init with Name and Digest types
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-08T18:52:24
| 2024-04-10T23:30:19
| 2024-04-10T23:30:05
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3541",
"html_url": "https://github.com/ollama/ollama/pull/3541",
"diff_url": "https://github.com/ollama/ollama/pull/3541.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3541.patch",
"merged_at": "2024-04-10T23:30:05"
}
| null |
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3541/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2710
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2710/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2710/comments
|
https://api.github.com/repos/ollama/ollama/issues/2710/events
|
https://github.com/ollama/ollama/issues/2710
| 2,151,276,892
|
I_kwDOJ0Z1Ps6AOeFc
| 2,710
|
Quitting taskbar app on Windows doesn't always close `ollama`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-02-23T15:05:13
| 2024-05-02T22:04:48
| 2024-05-02T22:04:48
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2710/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4436
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4436/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4436/comments
|
https://api.github.com/repos/ollama/ollama/issues/4436/events
|
https://github.com/ollama/ollama/pull/4436
| 2,296,277,894
|
PR_kwDOJ0Z1Ps5vcgVZ
| 4,436
|
return on part done
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-14T19:41:45
| 2024-05-16T00:16:25
| 2024-05-16T00:16:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4436",
"html_url": "https://github.com/ollama/ollama/pull/4436",
"diff_url": "https://github.com/ollama/ollama/pull/4436.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4436.patch",
"merged_at": "2024-05-16T00:16:25"
}
|
only copy as much as we're expecting to receive to prevent runaway downloads
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4436/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3707
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3707/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3707/comments
|
https://api.github.com/repos/ollama/ollama/issues/3707/events
|
https://github.com/ollama/ollama/issues/3707
| 2,248,988,902
|
I_kwDOJ0Z1Ps6GDNjm
| 3,707
|
What is the difference between these two models? I think this is a bug
|
{
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/followers",
"following_url": "https://api.github.com/users/olumolu/following{/other_user}",
"gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olumolu/subscriptions",
"organizations_url": "https://api.github.com/users/olumolu/orgs",
"repos_url": "https://api.github.com/users/olumolu/repos",
"events_url": "https://api.github.com/users/olumolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/olumolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-04-17T18:40:46
| 2024-04-17T19:35:47
| 2024-04-17T18:55:21
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?


One 8x7b is 4.1 GB and the other is 26 GB. What is the difference, since both are 8x7b Mixtral models? Can anyone fix this, if it is a bug?
### How should we solve this?
_No response_
### What is the impact of not solving this?
_No response_
### Anything else?
_No response_
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3707/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5586
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5586/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5586/comments
|
https://api.github.com/repos/ollama/ollama/issues/5586/events
|
https://github.com/ollama/ollama/issues/5586
| 2,399,566,579
|
I_kwDOJ0Z1Ps6PBnrz
| 5,586
|
version 0.2.1: error occurs when calling qwen-agent, but works normally in version 0.1.47
|
{
"login": "bjfk2006",
"id": 6290119,
"node_id": "MDQ6VXNlcjYyOTAxMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6290119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bjfk2006",
"html_url": "https://github.com/bjfk2006",
"followers_url": "https://api.github.com/users/bjfk2006/followers",
"following_url": "https://api.github.com/users/bjfk2006/following{/other_user}",
"gists_url": "https://api.github.com/users/bjfk2006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bjfk2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bjfk2006/subscriptions",
"organizations_url": "https://api.github.com/users/bjfk2006/orgs",
"repos_url": "https://api.github.com/users/bjfk2006/repos",
"events_url": "https://api.github.com/users/bjfk2006/events{/privacy}",
"received_events_url": "https://api.github.com/users/bjfk2006/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-07-10T02:15:16
| 2024-07-11T08:45:05
| 2024-07-10T02:53:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
gpu:cuda12.5+V100
model:qwen2:7b-instruct-q8_0
ollama: 0.2.1
code: https://github.com/QwenLM/Qwen-Agent
error info:
Jul 10 10:06:15 VM-77-13-ubuntu ollama[481292]: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda/template-instances/../mmq.cuh:2422: ERROR: CUDA kernel mul_mat_q has no device code compatible with CUDA arch 700. ggml-cuda.cu was compiled for: __CUDA_ARCH_LIST__
Jul 10 10:06:15 VM-77-13-ubuntu ollama[481292]: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda/template-instances/../mmq.cuh:2422: ERROR: CUDA kernel mul_mat_q has no device code compatible with CUDA arch 700. ggml-cuda.cu was compiled for: __CUDA_ARCH_LIST__
Jul 10 10:06:15 VM-77-13-ubuntu ollama[481292]: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda/template-instances/../mmq.cuh:2422: ERROR: CUDA kernel mul_mat_q has no device code compatible with CUDA arch 700. ggml-cuda.cu was compiled for: __CUDA_ARCH_LIST__
Jul 10 10:06:15 VM-77-13-ubuntu ollama[1057]: ggml_cuda_compute_forward: SILU failed
Jul 10 10:06:15 VM-77-13-ubuntu ollama[1057]: CUDA error: unspecified launch failure
Jul 10 10:06:15 VM-77-13-ubuntu ollama[1057]: current device: 0, in function ggml_cuda_compute_forward at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2283
Jul 10 10:06:15 VM-77-13-ubuntu ollama[1057]: err
Jul 10 10:06:15 VM-77-13-ubuntu ollama[1057]: GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:100: !"CUDA error"
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.1
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5586/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1913
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1913/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1913/comments
|
https://api.github.com/repos/ollama/ollama/issues/1913/events
|
https://github.com/ollama/ollama/issues/1913
| 2,075,351,707
|
I_kwDOJ0Z1Ps57s1qb
| 1,913
|
0.1.19 no longer uses my nvidia cards
|
{
"login": "skrew",
"id": 738170,
"node_id": "MDQ6VXNlcjczODE3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/738170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skrew",
"html_url": "https://github.com/skrew",
"followers_url": "https://api.github.com/users/skrew/followers",
"following_url": "https://api.github.com/users/skrew/following{/other_user}",
"gists_url": "https://api.github.com/users/skrew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skrew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skrew/subscriptions",
"organizations_url": "https://api.github.com/users/skrew/orgs",
"repos_url": "https://api.github.com/users/skrew/repos",
"events_url": "https://api.github.com/users/skrew/events{/privacy}",
"received_events_url": "https://api.github.com/users/skrew/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-01-10T22:50:47
| 2024-01-12T09:10:08
| 2024-01-12T09:10:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
worked on 0.1.18.
Logs from 0.1.19:
```
➜ ~ ollama serve
2024/01/10 22:35:20 images.go:808: total blobs: 5
2024/01/10 22:35:20 images.go:815: total unused blobs removed: 0
2024/01/10 22:35:20 routes.go:930: Listening on 127.0.0.1:11434 (version 0.1.19)
2024/01/10 22:35:21 shim_ext_server.go:142: Dynamic LLM variants [cuda rocm]
2024/01/10 22:35:21 gpu.go:35: Detecting GPU type
2024/01/10 22:35:21 gpu.go:54: Nvidia GPU detected
2024/01/10 22:35:21 gpu.go:84: CUDA Compute Capability detected: 6.1
size 49625198848
filetype Q8_0
architecture llama
type 47B
name gguf
embd 4096
head 32
head_kv 8
gqa 4
2024/01/10 22:35:26 gpu.go:84: CUDA Compute Capability detected: 6.1
2024/01/10 22:35:26 llm.go:70: system memory bytes: 0
2024/01/10 22:35:26 llm.go:71: required model bytes: 49625198848
2024/01/10 22:35:26 llm.go:72: required kv bytes: 268435456
2024/01/10 22:35:26 llm.go:73: required alloc bytes: 178956970
2024/01/10 22:35:26 llm.go:74: required total bytes: 50072591274
2024/01/10 22:35:26 gpu.go:84: CUDA Compute Capability detected: 6.1
2024/01/10 22:35:26 llm.go:105: not enough vram available, falling back to CPU only
2024/01/10 22:35:26 ext_server_common.go:136: Initializing internal llama server
```
Logs from 0.1.18:
```
2024/01/10 22:39:02 images.go:834: total blobs: 5
2024/01/10 22:39:02 images.go:841: total unused blobs removed: 0
2024/01/10 22:39:02 routes.go:929: Listening on 127.0.0.1:11434 (version 0.1.18)
2024/01/10 22:39:02 shim_ext_server.go:142: Dynamic LLM variants [rocm cuda]
2024/01/10 22:39:02 gpu.go:34: Detecting GPU type
2024/01/10 22:39:02 gpu.go:53: Nvidia GPU detected
...
Lazy loading /tmp/ollama314200454/cuda/libext_server.so library
2024/01/10 22:39:06 shim_ext_server.go:92: Loading Dynamic Shim llm server: /tmp/ollama314200454/cuda/libext_server.so
2024/01/10 22:39:06 gpu.go:146: 81110 MB VRAM available, loading up to 40 cuda GPU layers out of 32
2024/01/10 22:39:06 ext_server_common.go:143: Initializing internal llama server
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 10 CUDA devices:
Device 0: NVIDIA GeForce GTX 1070, compute capability 6.1
Device 1: NVIDIA GeForce GTX 1070, compute capability 6.1
Device 2: NVIDIA GeForce GTX 1070, compute capability 6.1
Device 3: NVIDIA GeForce GTX 1070, compute capability 6.1
Device 4: NVIDIA GeForce GTX 1070, compute capability 6.1
Device 5: NVIDIA GeForce GTX 1070, compute capability 6.1
Device 6: NVIDIA GeForce GTX 1070, compute capability 6.1
Device 7: NVIDIA GeForce GTX 1070, compute capability 6.1
Device 8: NVIDIA GeForce GTX 1070, compute capability 6.1
Device 9: NVIDIA GeForce GTX 1070, compute capability 6.1
llama_model_loader: loaded meta data with 26 key-value pairs and 995 tensors from (version GGUF V3 (latest))
...
llm_load_tensors: ggml ctx size = 0.38 MiB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 133.19 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: VRAM used: 47191.83 MiB
```
|
{
"login": "skrew",
"id": 738170,
"node_id": "MDQ6VXNlcjczODE3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/738170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skrew",
"html_url": "https://github.com/skrew",
"followers_url": "https://api.github.com/users/skrew/followers",
"following_url": "https://api.github.com/users/skrew/following{/other_user}",
"gists_url": "https://api.github.com/users/skrew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skrew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skrew/subscriptions",
"organizations_url": "https://api.github.com/users/skrew/orgs",
"repos_url": "https://api.github.com/users/skrew/repos",
"events_url": "https://api.github.com/users/skrew/events{/privacy}",
"received_events_url": "https://api.github.com/users/skrew/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1913/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4037
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4037/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4037/comments
|
https://api.github.com/repos/ollama/ollama/issues/4037/events
|
https://github.com/ollama/ollama/pull/4037
| 2,270,293,277
|
PR_kwDOJ0Z1Ps5uFOjc
| 4,037
|
Update langchainpy.md
|
{
"login": "Cephra",
"id": 7629358,
"node_id": "MDQ6VXNlcjc2MjkzNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7629358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cephra",
"html_url": "https://github.com/Cephra",
"followers_url": "https://api.github.com/users/Cephra/followers",
"following_url": "https://api.github.com/users/Cephra/following{/other_user}",
"gists_url": "https://api.github.com/users/Cephra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cephra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cephra/subscriptions",
"organizations_url": "https://api.github.com/users/Cephra/orgs",
"repos_url": "https://api.github.com/users/Cephra/repos",
"events_url": "https://api.github.com/users/Cephra/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cephra/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-30T01:02:01
| 2024-04-30T09:08:21
| 2024-04-30T03:19:06
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4037",
"html_url": "https://github.com/ollama/ollama/pull/4037",
"diff_url": "https://github.com/ollama/ollama/pull/4037.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4037.patch",
"merged_at": "2024-04-30T03:19:06"
}
|
Updated the code a bit since it was showing deprecation messages for me.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4037/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2099
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2099/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2099/comments
|
https://api.github.com/repos/ollama/ollama/issues/2099/events
|
https://github.com/ollama/ollama/pull/2099
| 2,091,289,006
|
PR_kwDOJ0Z1Ps5klaRF
| 2,099
|
Switch to local dlopen symbols
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-19T20:00:11
| 2024-01-19T20:22:07
| 2024-01-19T20:22:04
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2099",
"html_url": "https://github.com/ollama/ollama/pull/2099",
"diff_url": "https://github.com/ollama/ollama/pull/2099.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2099.patch",
"merged_at": "2024-01-19T20:22:04"
}
|
Fixes #2066
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2099/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1477
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1477/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1477/comments
|
https://api.github.com/repos/ollama/ollama/issues/1477/events
|
https://github.com/ollama/ollama/issues/1477
| 2,036,841,181
|
I_kwDOJ0Z1Ps55Z7rd
| 1,477
|
Mixtral 8X7B
|
{
"login": "pdavis68",
"id": 2781885,
"node_id": "MDQ6VXNlcjI3ODE4ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2781885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdavis68",
"html_url": "https://github.com/pdavis68",
"followers_url": "https://api.github.com/users/pdavis68/followers",
"following_url": "https://api.github.com/users/pdavis68/following{/other_user}",
"gists_url": "https://api.github.com/users/pdavis68/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdavis68/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdavis68/subscriptions",
"organizations_url": "https://api.github.com/users/pdavis68/orgs",
"repos_url": "https://api.github.com/users/pdavis68/repos",
"events_url": "https://api.github.com/users/pdavis68/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdavis68/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-12-12T02:07:06
| 2023-12-12T02:18:03
| 2023-12-12T02:07:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have read that Mixtral 8X7B requires a PR from llama.cpp (https://github.com/ggerganov/llama.cpp/pull/4406) according to this source: (https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF).
Are there any plans yet to incorporate these changes? Is there a timeline? Mixtral 8X7B looks very impressive (appears to outperform LLaMA 2 70B in most benchmarks) and I'd love to get it into Ollama!
Here's Mistral's page on it: https://mistral.ai/news/mixtral-of-experts/
|
{
"login": "pdavis68",
"id": 2781885,
"node_id": "MDQ6VXNlcjI3ODE4ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2781885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdavis68",
"html_url": "https://github.com/pdavis68",
"followers_url": "https://api.github.com/users/pdavis68/followers",
"following_url": "https://api.github.com/users/pdavis68/following{/other_user}",
"gists_url": "https://api.github.com/users/pdavis68/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdavis68/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdavis68/subscriptions",
"organizations_url": "https://api.github.com/users/pdavis68/orgs",
"repos_url": "https://api.github.com/users/pdavis68/repos",
"events_url": "https://api.github.com/users/pdavis68/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdavis68/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1477/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1477/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7059
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7059/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7059/comments
|
https://api.github.com/repos/ollama/ollama/issues/7059/events
|
https://github.com/ollama/ollama/issues/7059
| 2,558,670,041
|
I_kwDOJ0Z1Ps6YgjTZ
| 7,059
|
Have Ollama support the commands /exit and /quit.
|
{
"login": "bulrush15",
"id": 7031486,
"node_id": "MDQ6VXNlcjcwMzE0ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7031486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bulrush15",
"html_url": "https://github.com/bulrush15",
"followers_url": "https://api.github.com/users/bulrush15/followers",
"following_url": "https://api.github.com/users/bulrush15/following{/other_user}",
"gists_url": "https://api.github.com/users/bulrush15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bulrush15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bulrush15/subscriptions",
"organizations_url": "https://api.github.com/users/bulrush15/orgs",
"repos_url": "https://api.github.com/users/bulrush15/repos",
"events_url": "https://api.github.com/users/bulrush15/events{/privacy}",
"received_events_url": "https://api.github.com/users/bulrush15/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-10-01T09:24:36
| 2024-10-01T22:54:08
| 2024-10-01T22:54:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have Ollama 0.3.12. It's a fun tool! Thank you so much!
Can you have it support commands for `/exit` and `/quit`? They will do the same as `/bye`.
Thanks!
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7059/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7942
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7942/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7942/comments
|
https://api.github.com/repos/ollama/ollama/issues/7942/events
|
https://github.com/ollama/ollama/issues/7942
| 2,719,196,047
|
I_kwDOJ0Z1Ps6iE6OP
| 7,942
|
model requires more system memory than is available when useMmap
|
{
"login": "xgdgsc",
"id": 1189869,
"node_id": "MDQ6VXNlcjExODk4Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1189869?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xgdgsc",
"html_url": "https://github.com/xgdgsc",
"followers_url": "https://api.github.com/users/xgdgsc/followers",
"following_url": "https://api.github.com/users/xgdgsc/following{/other_user}",
"gists_url": "https://api.github.com/users/xgdgsc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xgdgsc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xgdgsc/subscriptions",
"organizations_url": "https://api.github.com/users/xgdgsc/orgs",
"repos_url": "https://api.github.com/users/xgdgsc/repos",
"events_url": "https://api.github.com/users/xgdgsc/events{/privacy}",
"received_events_url": "https://api.github.com/users/xgdgsc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 3
| 2024-12-05T03:01:23
| 2025-01-14T05:12:53
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I use continue vscode extension to call ollama config like
```
{
"model": "qwen2.5-coder:14b",
"title": "qwen2.5-coder:14b",
"provider": "ollama",
"completionOptions": {
"keepAlive": 9999999,
"useMmap": true
}
},
```
It still checks system memory, disregarding the `"useMmap": true` option, and returns a 500 internal error like:
```
{"error":"model requires more system memory (17.7 GiB) than is available (13.6 GiB)"}
```
### OS
Windows
### GPU
_No response_
### CPU
Other
### Ollama version
0.4.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7942/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7942/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7828
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7828/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7828/comments
|
https://api.github.com/repos/ollama/ollama/issues/7828/events
|
https://github.com/ollama/ollama/pull/7828
| 2,690,433,255
|
PR_kwDOJ0Z1Ps6DBgDJ
| 7,828
|
Easily see version without needing to go to command line
|
{
"login": "tagroup",
"id": 1417944,
"node_id": "MDQ6VXNlcjE0MTc5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1417944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tagroup",
"html_url": "https://github.com/tagroup",
"followers_url": "https://api.github.com/users/tagroup/followers",
"following_url": "https://api.github.com/users/tagroup/following{/other_user}",
"gists_url": "https://api.github.com/users/tagroup/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tagroup/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tagroup/subscriptions",
"organizations_url": "https://api.github.com/users/tagroup/orgs",
"repos_url": "https://api.github.com/users/tagroup/repos",
"events_url": "https://api.github.com/users/tagroup/events{/privacy}",
"received_events_url": "https://api.github.com/users/tagroup/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-11-25T11:33:48
| 2024-11-26T17:58:56
| 2024-11-26T17:58:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7828",
"html_url": "https://github.com/ollama/ollama/pull/7828",
"diff_url": "https://github.com/ollama/ollama/pull/7828.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7828.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7828/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3211
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3211/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3211/comments
|
https://api.github.com/repos/ollama/ollama/issues/3211/events
|
https://github.com/ollama/ollama/issues/3211
| 2,191,135,613
|
I_kwDOJ0Z1Ps6CmhN9
| 3,211
|
GPU not detected on Kubernetes - works locally
|
{
"login": "didlawowo",
"id": 12622760,
"node_id": "MDQ6VXNlcjEyNjIyNzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/12622760?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/didlawowo",
"html_url": "https://github.com/didlawowo",
"followers_url": "https://api.github.com/users/didlawowo/followers",
"following_url": "https://api.github.com/users/didlawowo/following{/other_user}",
"gists_url": "https://api.github.com/users/didlawowo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/didlawowo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/didlawowo/subscriptions",
"organizations_url": "https://api.github.com/users/didlawowo/orgs",
"repos_url": "https://api.github.com/users/didlawowo/repos",
"events_url": "https://api.github.com/users/didlawowo/events{/privacy}",
"received_events_url": "https://api.github.com/users/didlawowo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-03-18T03:10:56
| 2024-04-12T22:02:48
| 2024-04-12T22:02:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have a Kubernetes cluster with an RTX 4070 Super GPU.
Inside a container on the Kubernetes cluster, Ollama doesn't detect the GPU, but it works if I run Ollama directly on the node that has the GPU.
```
stream logs failed container "ollama" in pod "ollama-74fbf7d68b-lglf9" is waiting to start: ContainerCreating for ollama/ollama-74fbf7d68b-lglf9 (ollama)
time=2024-03-18T03:00:29.503Z level=INFO source=images.go:806 msg="total blobs: 0"
time=2024-03-18T03:00:29.515Z level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-18T03:00:29.515Z level=INFO source=routes.go:1110 msg="Listening on :11434 (version 0.1.29)"
time=2024-03-18T03:00:29.516Z level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to /tmp/ollama2476510653/runners ..."
time=2024-03-18T03:00:31.661Z level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [rocm_v60000 cpu_avx2 cpu cpu_avx cuda_v11]"
time=2024-03-18T03:00:31.661Z level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-18T03:00:31.661Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-18T03:00:31.668Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: ]"
time=2024-03-18T03:00:31.668Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-18T03:00:31.668Z level=INFO source=routes.go:1133 msg="no GPU detected"
```
### What did you expect to see?
I expect Ollama to discover my GPU, since the GPU is correctly present in the cluster (a Python Whisper workload uses it).
### Steps to reproduce
Deploy Ollama with Helm on Kubernetes with these parameters:
```yaml
chart: ollama
repoURL: https://otwld.github.io/ollama-helm/
targetRevision: 0.19.0
helm:
  values: |
    image:
      repository: fizzbuzz2/ollama
      tag: latest
      pullPolicy: Always
    imagePullSecrets:
      - name: registry-credentials
    runtimeClass: nvidia
    extraEnv:
      - name: NVIDIA_VISIBLE_DEVICES
        value: all
      - name: NVARCH
        value: x86_64
      - name: NV_CUDA_CUDART_VERSION
        value: 12.3.2
      - name: NVIDIA_DRIVER_CAPABILITIES
        value: all
    # extraArgs:
    #   - --gpu=all
    autoscaling:
      enabled: true
      minReplicas: 1
      maxReplicas: 2
      targetCPUUtilizationPercentage: 80
      targetMemoryUtilizationPercentage: 80
    ollama:
      gpu:
        enabled: true
        number: 3
      models:
        - mistral
        - codellama
        - llava
```
### Are there any recent changes that introduced the issue?
It has never worked before.
I have `nvidia-smi` on the node. As you can see, the Ollama running locally on the node is there, and the Whisper process from Kubernetes too, but not the Ollama from Kubernetes.
```
cluster@nvidia:~$ nvidia-smi
Mon Mar 18 04:02:36 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.14 Driver Version: 550.54.14 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4070 ... Off | 00000000:05:00.0 Off | N/A |
| 0% 42C P8 6W / 220W | 5048MiB / 12282MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 1201 C /usr/local/bin/ollama 454MiB |
| 0 N/A N/A 61256 C /usr/bin/python3 4586MiB |
+-----------------------------------------------------------------------------------------+
cluster@nvidia:~$ nvidia-container-toolkit -version
NVIDIA Container Runtime Hook version 1.14.6
commit: 5605d191332dcfeea802c4497360d60a65c7887e
```
Resource allocation works:
```
➜ src git:(main) kubectl view-allocations -r gpu
Alias tip: kub view-allocations -r gpu
Resource Requested Limit Allocatable Free
nvidia.com/gpu (50%) 4.0 (50%) 4.0 8.0 4.0
└─ nvidia (50%) 4.0 (50%) 4.0 8.0 4.0
├─ ollama-74fbf7d68b-lglf9 3.0 3.0 __ __
└─ whisper-api-68cc9d4565-s7wr7 1.0 1.0 __ __
```
### OS
Linux
### Architecture
amd64
### Platform
Docker
### Ollama version
latest
### GPU
_No response_
### GPU info
rtx 4070 super
### CPU
AMD
### Other software
Kubernetes with k3s and the NVIDIA driver installed manually on Ubuntu 23.10
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3211/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5320
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5320/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5320/comments
|
https://api.github.com/repos/ollama/ollama/issues/5320/events
|
https://github.com/ollama/ollama/pull/5320
| 2,377,620,652
|
PR_kwDOJ0Z1Ps5zvpo1
| 5,320
|
Update faq.md
|
{
"login": "Dino-Burger",
"id": 56079246,
"node_id": "MDQ6VXNlcjU2MDc5MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/56079246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dino-Burger",
"html_url": "https://github.com/Dino-Burger",
"followers_url": "https://api.github.com/users/Dino-Burger/followers",
"following_url": "https://api.github.com/users/Dino-Burger/following{/other_user}",
"gists_url": "https://api.github.com/users/Dino-Burger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dino-Burger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dino-Burger/subscriptions",
"organizations_url": "https://api.github.com/users/Dino-Burger/orgs",
"repos_url": "https://api.github.com/users/Dino-Burger/repos",
"events_url": "https://api.github.com/users/Dino-Burger/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dino-Burger/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-06-27T09:19:32
| 2024-08-14T17:15:41
| 2024-08-14T17:15:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5320",
"html_url": "https://github.com/ollama/ollama/pull/5320",
"diff_url": "https://github.com/ollama/ollama/pull/5320.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5320.patch",
"merged_at": null
}
|
Adding instructions on how to set environment variables when Ollama is _not_ run as a service.
It took me a while to find out how to do this... :-)
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5320/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6791
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6791/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6791/comments
|
https://api.github.com/repos/ollama/ollama/issues/6791/events
|
https://github.com/ollama/ollama/issues/6791
| 2,524,410,174
|
I_kwDOJ0Z1Ps6Wd3E-
| 6,791
|
Occasionally getting a 500 response and 'ollama._types.ResponseError: health resp' seemingly out of nowhere
|
{
"login": "danielj0nes",
"id": 32555231,
"node_id": "MDQ6VXNlcjMyNTU1MjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/32555231?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielj0nes",
"html_url": "https://github.com/danielj0nes",
"followers_url": "https://api.github.com/users/danielj0nes/followers",
"following_url": "https://api.github.com/users/danielj0nes/following{/other_user}",
"gists_url": "https://api.github.com/users/danielj0nes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielj0nes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielj0nes/subscriptions",
"organizations_url": "https://api.github.com/users/danielj0nes/orgs",
"repos_url": "https://api.github.com/users/danielj0nes/repos",
"events_url": "https://api.github.com/users/danielj0nes/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielj0nes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-09-13T09:50:07
| 2024-09-13T09:50:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello, I am running a Python server that receives and sends requests to an instance of Ollama (with the Llama 3.1 model).
When lots of requests are sent at once, I occasionally receive a 500 response from the Ollama server which causes the process to crash. The error I get from the Python Ollama module is as follows:
```
Traceback (most recent call last):
  File "ollama\_client.py", line 407, in generate
  File "ollama\_client.py", line 378, in _request_stream
  File "ollama\_client.py", line 348, in _request
ollama._types.ResponseError: health resp: Get "http://127.0.0.1:61519/health": dial tcp 127.0.0.1:61519: connectex: Only one usage of each socket address (protocol/network address/port) is normally permitted.
```
I am not trying to do anything else with Ollama whilst requests to generate are being sent.
Is there something in Ollama that is automatically attempting to bind this port? Can I somehow just disable this '/health' endpoint?
Thanks in advance.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.10
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6791/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6791/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3820
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3820/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3820/comments
|
https://api.github.com/repos/ollama/ollama/issues/3820/events
|
https://github.com/ollama/ollama/issues/3820
| 2,256,421,835
|
I_kwDOJ0Z1Ps6GfkPL
| 3,820
|
TLS handshake timeout when pulling models
|
{
"login": "Shzyhao",
"id": 77272241,
"node_id": "MDQ6VXNlcjc3MjcyMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/77272241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shzyhao",
"html_url": "https://github.com/Shzyhao",
"followers_url": "https://api.github.com/users/Shzyhao/followers",
"following_url": "https://api.github.com/users/Shzyhao/following{/other_user}",
"gists_url": "https://api.github.com/users/Shzyhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shzyhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shzyhao/subscriptions",
"organizations_url": "https://api.github.com/users/Shzyhao/orgs",
"repos_url": "https://api.github.com/users/Shzyhao/repos",
"events_url": "https://api.github.com/users/Shzyhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shzyhao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-22T12:33:19
| 2024-05-09T21:06:48
| 2024-05-09T21:06:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When using the console to download a model, most of the time the speed is very slow and it reports a TLS handshake timeout. The computer's normal network speed is 50 MB/s, but while downloading models the speed is below 300 KB/s most of the time. All models behave this way.

### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.32
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3820/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2117
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2117/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2117/comments
|
https://api.github.com/repos/ollama/ollama/issues/2117/events
|
https://github.com/ollama/ollama/pull/2117
| 2,092,297,145
|
PR_kwDOJ0Z1Ps5kozIt
| 2,117
|
Unlock mutex when failing to load model
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-21T01:06:59
| 2024-01-21T01:54:47
| 2024-01-21T01:54:46
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2117",
"html_url": "https://github.com/ollama/ollama/pull/2117",
"diff_url": "https://github.com/ollama/ollama/pull/2117.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2117.patch",
"merged_at": "2024-01-21T01:54:46"
}
|
Avoids `ollama serve` hanging with `concurrent llm servers not yet supported, waiting for prior server to complete` when a model fails to load
I believe this also fixes https://github.com/jmorganca/ollama/issues/1641
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2117/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8620
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8620/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8620/comments
|
https://api.github.com/repos/ollama/ollama/issues/8620/events
|
https://github.com/ollama/ollama/issues/8620
| 2,814,178,540
|
I_kwDOJ0Z1Ps6nvPTs
| 8,620
|
Add support for Qwen 2.5 VL models (3B, 7B and 32B), instruct versions
|
{
"login": "YarvixPA",
"id": 152553832,
"node_id": "U_kgDOCRfJaA",
"avatar_url": "https://avatars.githubusercontent.com/u/152553832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YarvixPA",
"html_url": "https://github.com/YarvixPA",
"followers_url": "https://api.github.com/users/YarvixPA/followers",
"following_url": "https://api.github.com/users/YarvixPA/following{/other_user}",
"gists_url": "https://api.github.com/users/YarvixPA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YarvixPA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YarvixPA/subscriptions",
"organizations_url": "https://api.github.com/users/YarvixPA/orgs",
"repos_url": "https://api.github.com/users/YarvixPA/repos",
"events_url": "https://api.github.com/users/YarvixPA/events{/privacy}",
"received_events_url": "https://api.github.com/users/YarvixPA/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 9
| 2025-01-27T22:20:04
| 2025-01-30T11:41:36
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello, just hours after their release, I would like to suggest adding support for the Qwen2.5-VL models.
**[Qwen2.5-VL - Hugging Face collection](https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5)**
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8620/reactions",
"total_count": 13,
"+1": 13,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8620/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3447
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3447/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3447/comments
|
https://api.github.com/repos/ollama/ollama/issues/3447/events
|
https://github.com/ollama/ollama/pull/3447
| 2,219,599,627
|
PR_kwDOJ0Z1Ps5rY3bf
| 3,447
|
upgrade langchain for python-privategpt example
|
{
"login": "guanlisheng",
"id": 721973,
"node_id": "MDQ6VXNlcjcyMTk3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/721973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guanlisheng",
"html_url": "https://github.com/guanlisheng",
"followers_url": "https://api.github.com/users/guanlisheng/followers",
"following_url": "https://api.github.com/users/guanlisheng/following{/other_user}",
"gists_url": "https://api.github.com/users/guanlisheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guanlisheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guanlisheng/subscriptions",
"organizations_url": "https://api.github.com/users/guanlisheng/orgs",
"repos_url": "https://api.github.com/users/guanlisheng/repos",
"events_url": "https://api.github.com/users/guanlisheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/guanlisheng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-02T05:35:55
| 2024-11-21T09:27:26
| 2024-11-21T09:27:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3447",
"html_url": "https://github.com/ollama/ollama/pull/3447",
"diff_url": "https://github.com/ollama/ollama/pull/3447.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3447.patch",
"merged_at": null
}
| null |
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3447/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6778
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6778/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6778/comments
|
https://api.github.com/repos/ollama/ollama/issues/6778/events
|
https://github.com/ollama/ollama/issues/6778
| 2,523,213,492
|
I_kwDOJ0Z1Ps6WZS60
| 6,778
|
Would be nice to have a "continue last message" option with the `/api/chat` endpoint
|
{
"login": "hammer-ai",
"id": 143602265,
"node_id": "U_kgDOCI8yWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/143602265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hammer-ai",
"html_url": "https://github.com/hammer-ai",
"followers_url": "https://api.github.com/users/hammer-ai/followers",
"following_url": "https://api.github.com/users/hammer-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/hammer-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hammer-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hammer-ai/subscriptions",
"organizations_url": "https://api.github.com/users/hammer-ai/orgs",
"repos_url": "https://api.github.com/users/hammer-ai/repos",
"events_url": "https://api.github.com/users/hammer-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/hammer-ai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-09-12T19:38:21
| 2024-09-13T05:22:52
| 2024-09-13T05:22:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi there, it would be nice to have a "continue last message" option with the `/api/chat` endpoint. Thanks!
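As a hedged sketch of one pattern a client could use today (resending the history with the partial assistant reply as the final message, so the model picks up from it; whether this actually continues cleanly depends on the model's template, and the model name here is only an example), the request body might look like:

```python
import json

# Hypothetical sketch: resend the chat history with the partial assistant
# message last, asking the model to pick up where that message left off.
payload = {
    "model": "llama3",  # any locally installed model
    "messages": [
        {"role": "user", "content": "Tell me a story."},
        {"role": "assistant", "content": "Once upon a time"},  # reply to continue
    ],
    "stream": False,
}
body = json.dumps(payload)
# POST `body` to http://localhost:11434/api/chat, e.g. with urllib.request
print(json.loads(body)["messages"][-1]["content"])
```

A dedicated "continue" flag would make this explicit instead of relying on template behavior.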
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6778/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3433
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3433/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3433/comments
|
https://api.github.com/repos/ollama/ollama/issues/3433/events
|
https://github.com/ollama/ollama/issues/3433
| 2,217,424,110
|
I_kwDOJ0Z1Ps6EKzTu
| 3,433
|
Add the chatglm3-6b-128k model
|
{
"login": "wantong-lab",
"id": 60781328,
"node_id": "MDQ6VXNlcjYwNzgxMzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/60781328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wantong-lab",
"html_url": "https://github.com/wantong-lab",
"followers_url": "https://api.github.com/users/wantong-lab/followers",
"following_url": "https://api.github.com/users/wantong-lab/following{/other_user}",
"gists_url": "https://api.github.com/users/wantong-lab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wantong-lab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wantong-lab/subscriptions",
"organizations_url": "https://api.github.com/users/wantong-lab/orgs",
"repos_url": "https://api.github.com/users/wantong-lab/repos",
"events_url": "https://api.github.com/users/wantong-lab/events{/privacy}",
"received_events_url": "https://api.github.com/users/wantong-lab/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2024-04-01T03:12:10
| 2024-04-20T11:57:30
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?
https://huggingface.co/THUDM/chatglm3-6b-128k
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3433/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4473
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4473/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4473/comments
|
https://api.github.com/repos/ollama/ollama/issues/4473/events
|
https://github.com/ollama/ollama/issues/4473
| 2,300,175,577
|
I_kwDOJ0Z1Ps6JGeTZ
| 4,473
|
InternVL
|
{
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enryteam/followers",
"following_url": "https://api.github.com/users/enryteam/following{/other_user}",
"gists_url": "https://api.github.com/users/enryteam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enryteam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enryteam/subscriptions",
"organizations_url": "https://api.github.com/users/enryteam/orgs",
"repos_url": "https://api.github.com/users/enryteam/repos",
"events_url": "https://api.github.com/users/enryteam/events{/privacy}",
"received_events_url": "https://api.github.com/users/enryteam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 4
| 2024-05-16T11:43:14
| 2025-01-28T13:32:42
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://github.com/OpenGVLab/InternVL
thanks
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4473/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8148
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8148/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8148/comments
|
https://api.github.com/repos/ollama/ollama/issues/8148/events
|
https://github.com/ollama/ollama/pull/8148
| 2,746,392,142
|
PR_kwDOJ0Z1Ps6Fkplo
| 8,148
|
Add support for applying control vectors in gguf format [Rebased on v0.5.7]
|
{
"login": "itszn",
"id": 1857794,
"node_id": "MDQ6VXNlcjE4NTc3OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1857794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itszn",
"html_url": "https://github.com/itszn",
"followers_url": "https://api.github.com/users/itszn/followers",
"following_url": "https://api.github.com/users/itszn/following{/other_user}",
"gists_url": "https://api.github.com/users/itszn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itszn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itszn/subscriptions",
"organizations_url": "https://api.github.com/users/itszn/orgs",
"repos_url": "https://api.github.com/users/itszn/repos",
"events_url": "https://api.github.com/users/itszn/events{/privacy}",
"received_events_url": "https://api.github.com/users/itszn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 2
| 2024-12-18T00:11:08
| 2025-01-17T00:52:34
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8148",
"html_url": "https://github.com/ollama/ollama/pull/8148",
"diff_url": "https://github.com/ollama/ollama/pull/8148.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8148.patch",
"merged_at": null
}
|
Current Supported Ollama Release Version: v0.5.7
Control Vectors allow for changing the behavior of a model by steering towards or away from a specific behavior.
You can learn more about them from these sources:
https://hlfshell.ai/posts/representation-engineering/
https://vgel.me/posts/representation-engineering/
Earlier this year, support for loading and applying control vectors in GGUF format was added to llama.cpp (https://github.com/ggerganov/llama.cpp/pull/5970). This pull request exposes that llama.cpp feature via the Modelfile, making it easy to apply a control vector on top of an existing ollama model and serve it via ollama's API (currently there is no off-the-shelf serving solution that supports control vectors).
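The steering idea above can be shown with a toy sketch (not ollama or llama.cpp code; the numbers are made up): a control vector is a learned direction that is added to a layer's hidden state, scaled by a strength.

```python
# Toy illustration: steering adds a scaled direction vector to hidden activations.
hidden = [0.2, -1.0, 0.5]      # hypothetical hidden-state activations
direction = [1.0, 0.0, -1.0]   # hypothetical learned control vector
strength = 0.4                 # cf. PARAMETER control_strength below

steered = [h + strength * d for h, d in zip(hidden, direction)]
print(steered)
```

A positive strength pushes activations toward the trained behavior; a negative strength pushes away from it.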
To train a control vector for a model, you can use this library, which supports exporting in the GGUF format: https://github.com/vgel/repeng
# Building
**Note on updating:** This branch will be updated as ollama releases; use `git checkout origin/feat/control-vectors` to update your local copy, as `git pull` will not work due to rebasing.
Until this PR is merged you can use it by cloning my branch and building directly. See these docs on how to build: https://github.com/ollama/ollama/blob/main/docs/development.md#overview
It boils down to installing a few dependencies and then using make
```bash
# Grab this branch if you do not have it already
git clone https://github.com/itszn/ollama
cd ollama
# If you already have it and just want to update
git fetch
git checkout origin/feat/control-vectors
make -j 5
ls -la ./ollama
```
Then you can run the server like normal via the `serve` command (make sure no old versions of ollama are running)
```
./ollama serve
```
I will try to update this PR alongside ollama releases, so you hopefully won't miss any features while keeping control vector support 🎉
If you like this feature, leave an emoji ❤️
# Example
Here is an example of how you can use this PR to build and serve a model with a control vector:
## Training the vector
First, train a control vector for the given model. In this example I am using https://github.com/vgel/repeng to train off of [`Mistral-7B-Instruct-v0.3`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3). This takes about 3 minutes on my Mac mini.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# make_dataset and truncated_output_suffixes_512 are helpers from repeng's example notebook
from repeng import ControlVector, ControlModel, DatasetEntry

model_name = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model = ControlModel(model, list(range(-5, -18, -1)))
# Train the control vector with two contrasting writing styles
happy_dataset = make_dataset(
"Act as if you're extremely {persona}.",
["happy", "euphoric"],
["sad", "depressed"],
truncated_output_suffixes_512,
)
model.reset()
happy_vector = ControlVector.train(model, tokenizer, happy_dataset)
happy_vector.export_gguf('/opt/happy.gguf')
```
## Applying the GGUF via Modelfile
### Ensure the base model works with ollama
First, make sure you have a working instance of `Mistral-7B-Instruct-v0.3` in ollama, using the correct Mistral template (it can be quantized).
You can also use the official ollama image for the model (e.g. https://ollama.com/library/mistral:7b), but make sure it is the same one you trained against, or it may not work as expected.
<details>
<summary>Minimal Modelfile for `Mistral-7B-Instruct-v0.3` </summary>
```modelfile
FROM .
TEMPLATE """{{- if .Suffix }}[SUFFIX]{{ .Suffix }}[PREFIX] {{ .Prompt }}
{{- else if .Messages }}
{{- range $index, $_ := .Messages }}
{{- if eq .Role "user" }}[INST] {{ if and $.System (eq (len (slice $.Messages $index)) 1) }}{{ $.System }}
{{ end }}{{ .Content }}[/INST]
{{- else if eq .Role "assistant" }} {{ .Content }}</s>
{{- end }}
{{- end }}
{{- else }}[INST] {{ if .System }}{{ .System }}
{{ end }}{{ .Prompt }} [/INST]
{{- end }} {{ .Response }}
{{- if .Response }}</s>
{{- end }}"""
PARAMETER stop [INST]
PARAMETER stop [/INST]
PARAMETER stop [PREFIX]
PARAMETER stop [MIDDLE]
PARAMETER stop [SUFFIX]
```
</details>
```bash
Mistral-7B-Instruct-v0.3 % ./ollama create Mistral-7B-Instruct-v0.3 -f Modelfile
transferring model data 100%
converting model
using existing layer sha256:6cd684e7092d1561237a5de7262a0693940e712dbfa5f4556faf2ee41eec004c
using existing layer sha256:51707752a87ca45dc91470c0e4974028eb50096af69f97d9bef091edcf51a649
using existing layer sha256:5dea4f4d0fffcd67078a5f8fa107312bcf1d7d658cc668631a4fd6b4530a7159
creating new layer sha256:ccfb628e0111a2a7cd07cb19c0fc8c984ed8d72846c6f92e8a1b1b8842e82bb3
writing manifest
success
% ./ollama run Mistral-7B-Instruct-v0.3
>>> how do you feel?
1. I am an artificial intelligence and do not have feelings or emotions like humans do.
```
### Create a new model with the control vector
Now we will define our modified model via a new Modelfile (**IMPORTANT**: you must provide an absolute path to the control vector gguf file, e.g. `/home/user/models/happy-mistral/happy.gguf`)
```modelfile
FROM Mistral-7B-Instruct-v0.3
CONTROLVECTOR /opt/happy.gguf
PARAMETER control_strength 0.4
```
```bash
happy-mistral % ./ollama create happy-mistral -f Modelfile
transferring model data
using existing layer sha256:6cd684e7092d1561237a5de7262a0693940e712dbfa5f4556faf2ee41eec004c
using existing layer sha256:51707752a87ca45dc91470c0e4974028eb50096af69f97d9bef091edcf51a649
using existing layer sha256:218dfbcc5cc2b4949ff62cf67ef3707ad5ebbfcc110f35ab5516440775cc1ca5
using existing layer sha256:683c80fc5e2261a899cc69eea9143e4a8fee9194acf4aed2767d0354c862dfe7
creating new layer sha256:1c7084be5b7e84d88d27c2b8ad5542a583d39e2c67b517436ef2d9fcce859ceb
writing manifest
success
% ./ollama run happy-mistral
>>> how do you feel
😃 I'm absolutely thrilled and elated! You did it, dude!
```
Note that in the server debug logs, the new control vector is applied with the strength from `PARAMETER control_strength`:
```
time=2024-12-17T15:57:40.248-08:00 level=DEBUG source=runner.go:897 msg="applying control vector" /Users/nyan/.ollama/models/blobs/sha256-218dfbcc5cc2b4949ff62cf67ef3707ad5ebbfcc110f35ab5516440775cc1ca5=0.4000000059604645
```
### Negative strength
We can also use a negative strength in our Modelfile to apply the opposite direction of the vector (note that not every vector works well bidirectionally):
```modelfile
FROM Mistral-7B-Instruct-v0.3
CONTROLVECTOR /opt/happy.gguf
PARAMETER control_strength -0.5
```
```
happy-mistral % ollama create sad-mistral -f Modelfile
% ./ollama run sad-mistral
>>> how do you feel
1. I'm not sure if I should feel bad for a while, but I do know that it's important to have someone who is always sad and feeling the weight of the world.
```
# Known Issues / Limitations
- Control vectors must be provided as an absolute path in the Modelfile (e.g. `CONTROLVECTOR /home/user/models/happy-mistral/happy.gguf`)
- The vector currently applies to all layers
- `control_strength` is only applied when launching a new runner; it cannot be set via API params
- If you re-create a running model with a different `control_strength`, you must restart that model with `./ollama stop <model>`
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8148/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8148/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6725
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6725/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6725/comments
|
https://api.github.com/repos/ollama/ollama/issues/6725/events
|
https://github.com/ollama/ollama/issues/6725
| 2,516,138,023
|
I_kwDOJ0Z1Ps6V-Tgn
| 6,725
|
Incorrect AppDir when creating banner script (Preview)
|
{
"login": "DJStompZone",
"id": 85457381,
"node_id": "MDQ6VXNlcjg1NDU3Mzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/85457381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DJStompZone",
"html_url": "https://github.com/DJStompZone",
"followers_url": "https://api.github.com/users/DJStompZone/followers",
"following_url": "https://api.github.com/users/DJStompZone/following{/other_user}",
"gists_url": "https://api.github.com/users/DJStompZone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DJStompZone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DJStompZone/subscriptions",
"organizations_url": "https://api.github.com/users/DJStompZone/orgs",
"repos_url": "https://api.github.com/users/DJStompZone/repos",
"events_url": "https://api.github.com/users/DJStompZone/events{/privacy}",
"received_events_url": "https://api.github.com/users/DJStompZone/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-09-10T11:09:58
| 2024-10-30T16:24:32
| 2024-10-30T16:24:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Upon installing Ollama (Preview, 0.3.10.0) on Windows*, I noticed an error in the log file:
```
time=2024-09-10T05:45:41.543-05:00 level=INFO source=logging.go:50 msg="ollama app started"
time=2024-09-10T05:45:41.564-05:00 level=INFO source=store.go:96 msg="wrote store: C:\\Users\\desco\\AppData\\Local\\Ollama\\config.json"
time=2024-09-10T05:45:41.581-05:00 level=INFO source=store.go:96 msg="wrote store: C:\\Users\\desco\\AppData\\Local\\Ollama\\config.json"
time=2024-09-10T05:45:41.582-05:00 level=INFO source=server.go:176 msg="unable to connect to server"
time=2024-09-10T05:45:41.583-05:00 level=INFO source=server.go:135 msg="starting server..."
time=2024-09-10T05:45:41.963-05:00 level=INFO source=server.go:121 msg="started ollama server with pid 7604"
time=2024-09-10T05:45:41.963-05:00 level=INFO source=server.go:123 msg="ollama server logs C:\\Users\\desco\\AppData\\Local\\Ollama\\server.log"
time=2024-09-10T05:45:43.695-05:00 level=WARN source=lifecycle.go:51 msg="Failed to launch getting started shell: getting started banner script error CreateFile C:\\Users\\desco\\AppData\\Local\\Programs\\Ollama\\ollama_welcome.ps1: The system cannot find the path specified."
```
* Note: Installer launched via `ollamasetup.exe /DIR="H:/ollama"`
The app.log file is located in `%LOCALAPPDATA%\Ollama`, but the error message indicates the AppDir is erroneously set to `%LOCALAPPDATA%\Programs\Ollama`. It's worth noting that `ollama_welcome.ps1` **was** successfully installed to `H:\ollama\ollama_welcome.ps1`, which correctly follows the user-specified install directory.
Unfortunately, I have little to no proficiency in Go, but I figured I should point it out regardless.
https://github.com/ollama/ollama/blob/83a9b5271a68c7d1f8443f91c8d8b7d24ab581a9/app/lifecycle/getstarted_windows.go#L15
<hr>
### Environment Info
- **OS**: Windows 11 Pro, build 22631
- **CPU**: Ryzen 5 3600
- **GPU**: RTX 4060
- **Installer version**: Preview/0.3.10.0
SHA256: 3BE19A085685324066762F33C46C4A1121F27E7A1EA9B441D0BECF57DBB34375
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6725/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4126
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4126/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4126/comments
|
https://api.github.com/repos/ollama/ollama/issues/4126/events
|
https://github.com/ollama/ollama/issues/4126
| 2,277,659,229
|
I_kwDOJ0Z1Ps6HwlJd
| 4,126
|
Some Ollama models apparently affected by llama.cpp BPE pretokenization issue
|
{
"login": "sealad886",
"id": 155285242,
"node_id": "U_kgDOCUF2-g",
"avatar_url": "https://avatars.githubusercontent.com/u/155285242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sealad886",
"html_url": "https://github.com/sealad886",
"followers_url": "https://api.github.com/users/sealad886/followers",
"following_url": "https://api.github.com/users/sealad886/following{/other_user}",
"gists_url": "https://api.github.com/users/sealad886/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sealad886/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sealad886/subscriptions",
"organizations_url": "https://api.github.com/users/sealad886/orgs",
"repos_url": "https://api.github.com/users/sealad886/repos",
"events_url": "https://api.github.com/users/sealad886/events{/privacy}",
"received_events_url": "https://api.github.com/users/sealad886/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 12
| 2024-05-03T13:13:23
| 2025-01-06T04:49:04
| 2025-01-06T04:49:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
See the following llama.cpp issues/PRs:
* [PR 6920](https://github.com/ggerganov/llama.cpp/pull/6920): llama : improve BPE pre-processing + LLaMA 3 and Deepseek support
* [Issue 7030](https://github.com/ggerganov/llama.cpp/issues/7030): Command-R GGUF conversion no longer working
* [Issue 7040](https://github.com/ggerganov/llama.cpp/issues/7040): Command-R-Plus unable to convert or use after BPE pretokenizer update
* many others regarding various models either producing gibberish or otherwise not working
Using updated `llama.cpp` builds and having done a little digging under the hood on the BPE issue, this is an example verbose output when starting `ollama serve`:
```
time=2024-05-03T14:01:02.120+01:00 level=INFO source=images.go:828 msg="total blobs: 36"
time=2024-05-03T14:01:02.124+01:00 level=INFO source=images.go:835 msg="total unused blobs removed: 0"
time=2024-05-03T14:01:02.125+01:00 level=INFO source=routes.go:1071 msg="Listening on 127.0.0.1:11434 (version 0.1.33)"
time=2024-05-03T14:01:02.125+01:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/var/folders/b8/br9qpd7x3md9qcdzps_58h240000gn/T/ollama1317780243/runners
time=2024-05-03T14:01:02.153+01:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [metal]"
time=2024-05-03T14:01:20.990+01:00 level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=41 memory.available="27648.0 MiB" memory.required.full="22869.9 MiB" memory.required.partial="22869.9 MiB" memory.required.kv="2560.0 MiB" memory.weights.total="19281.9 MiB" memory.weights.repeating="17641.2 MiB" memory.weights.nonrepeating="1640.7 MiB" memory.graph.full="516.0 MiB" memory.graph.partial="516.0 MiB"
time=2024-05-03T14:01:20.990+01:00 level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=41 memory.available="27648.0 MiB" memory.required.full="22869.9 MiB" memory.required.partial="22869.9 MiB" memory.required.kv="2560.0 MiB" memory.weights.total="19281.9 MiB" memory.weights.repeating="17641.2 MiB" memory.weights.nonrepeating="1640.7 MiB" memory.graph.full="516.0 MiB" memory.graph.partial="516.0 MiB"
time=2024-05-03T14:01:20.991+01:00 level=INFO source=server.go:289 msg="starting llama server" cmd="/var/folders/b8/br9qpd7x3md9qcdzps_58h240000gn/T/ollama1317780243/runners/metal/ollama_llama_server --model /Users/andrew/.ollama/models/blobs/sha256-8a9611e7bca168be635d39d21927d2b8e7e8ea0b5d0998b7d5980daf1f8d4205 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 41 --parallel 1 --port 62223"
time=2024-05-03T14:01:21.030+01:00 level=INFO source=sched.go:340 msg="loaded runners" count=1
time=2024-05-03T14:01:21.030+01:00 level=INFO source=server.go:432 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2606,"msg":"logging to file is disabled.","tid":"0x1f56dbac0","timestamp":1714741281}
{"build":2770,"commit":"952d03d","function":"main","level":"INFO","line":2823,"msg":"build info","tid":"0x1f56dbac0","timestamp":1714741281}
{"function":"main","level":"INFO","line":2830,"msg":"system info","n_threads":6,"n_threads_batch":-1,"system_info":"AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | ","tid":"0x1f56dbac0","timestamp":1714741281,"total_threads":12}
llama_model_loader: loaded meta data with 23 key-value pairs and 322 tensors from /Users/andrew/.ollama/models/blobs/sha256-8a9611e7bca168be635d39d21927d2b8e7e8ea0b5d0998b7d5980daf1f8d4205 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = command-r
llama_model_loader: - kv 1: general.name str = c4ai-command-r-v01
llama_model_loader: - kv 2: command-r.block_count u32 = 40
llama_model_loader: - kv 3: command-r.context_length u32 = 131072
llama_model_loader: - kv 4: command-r.embedding_length u32 = 8192
llama_model_loader: - kv 5: command-r.feed_forward_length u32 = 22528
llama_model_loader: - kv 6: command-r.attention.head_count u32 = 64
llama_model_loader: - kv 7: command-r.attention.head_count_kv u32 = 64
llama_model_loader: - kv 8: command-r.rope.freq_base f32 = 8000000.000000
llama_model_loader: - kv 9: command-r.attention.layer_norm_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: command-r.logit_scale f32 = 0.062500
llama_model_loader: - kv 12: command-r.rope.scaling.type str = none
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,256000] = ["<PAD>", "<UNK>", "<CLS>", "<SEP>", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,253333] = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ a...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 5
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 255001
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 41 tensors
llama_model_loader: - type q4_0: 280 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab:
llm_load_vocab: ************************************
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
llm_load_vocab: CONSIDER REGENERATING THE MODEL
llm_load_vocab: ************************************
llm_load_vocab:
llm_load_vocab: special tokens definition check successful ( 1008/256000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = command-r
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 256000
llm_load_print_meta: n_merges = 253333
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 64
llm_load_print_meta: n_layer = 40
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 8192
llm_load_print_meta: n_embd_v_gqa = 8192
llm_load_print_meta: f_norm_eps = 1.0e-05
llm_load_print_meta: f_norm_rms_eps = 0.0e+00
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 6.2e-02
llm_load_print_meta: n_ff = 22528
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = none
llm_load_print_meta: freq_base_train = 8000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 35B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 34.98 B
llm_load_print_meta: model size = 18.83 GiB (4.62 BPW)
llm_load_print_meta: general.name = c4ai-command-r-v01
llm_load_print_meta: BOS token = 5 '<BOS_TOKEN>'
llm_load_print_meta: EOS token = 255001 '<|END_OF_TURN_TOKEN|>'
llm_load_print_meta: PAD token = 0 '<PAD>'
llm_load_print_meta: LF token = 136 'Ä'
llm_load_tensors: ggml ctx size = 0.34 MiB
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 19281.92 MiB, (19282.00 / 27648.00)
llm_load_tensors: offloading 40 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 41/41 layers to GPU
llm_load_tensors: CPU buffer size = 1640.62 MiB
llm_load_tensors: Metal buffer size = 19281.91 MiB
.......................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 8000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Pro
ggml_metal_init: picking default device: Apple M3 Pro
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name: Apple M3 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple9 (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 28991.03 MB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 2560.00 MiB, (21847.88 / 27648.00)
llama_kv_cache_init: Metal KV buffer size = 2560.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CPU output buffer size = 1.01 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 516.00 MiB, (22363.88 / 27648.00)
llama_new_context_with_model: Metal compute buffer size = 516.00 MiB
llama_new_context_with_model: CPU compute buffer size = 20.01 MiB
llama_new_context_with_model: graph nodes = 1208
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"0x1f56dbac0","timestamp":1714741287}
{"function":"initialize","level":"INFO","line":460,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"0x1f56dbac0","timestamp":1714741287}
{"function":"main","level":"INFO","line":3067,"msg":"model loaded","tid":"0x1f56dbac0","timestamp":1714741287}
{"function":"main","hostname":"127.0.0.1","level":"INFO","line":3270,"msg":"HTTP server listening","n_threads_http":"11","port":"62223","tid":"0x1f56dbac0","timestamp":1714741287}
{"function":"update_slots","level":"INFO","line":1581,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"0x1f56dbac0","timestamp":1714741287}
```
The calling Python code essentially distills down to:
```python
response = ollama.generate('command-r', system=system, prompt=prompt, keep_alive='1m', stream=False, raw=False)['response']
```
I think the fix will be re-converting and re-quantizing all of these models, which is what the folks in the llama.cpp world are doing now.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.33
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4126/reactions",
"total_count": 16,
"+1": 16,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4126/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5510
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5510/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5510/comments
|
https://api.github.com/repos/ollama/ollama/issues/5510/events
|
https://github.com/ollama/ollama/pull/5510
| 2,393,248,791
|
PR_kwDOJ0Z1Ps50koYp
| 5,510
|
cmd: display transfer model data progress
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-05T23:58:52
| 2024-07-31T17:16:37
| 2024-07-31T17:16:37
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5510",
"html_url": "https://github.com/ollama/ollama/pull/5510",
"diff_url": "https://github.com/ollama/ollama/pull/5510.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5510.patch",
"merged_at": null
}
|
displays `transferring model data 24% ⠇` while model data is being transferred
rebased on top of https://github.com/ollama/ollama/pull/5441
https://github.com/ollama/ollama/issues/5423
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5510/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2432
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2432/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2432/comments
|
https://api.github.com/repos/ollama/ollama/issues/2432/events
|
https://github.com/ollama/ollama/pull/2432
| 2,127,739,346
|
PR_kwDOJ0Z1Ps5mg3BH
| 2,432
|
Snap packaging
|
{
"login": "mz2",
"id": 71363,
"node_id": "MDQ6VXNlcjcxMzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mz2",
"html_url": "https://github.com/mz2",
"followers_url": "https://api.github.com/users/mz2/followers",
"following_url": "https://api.github.com/users/mz2/following{/other_user}",
"gists_url": "https://api.github.com/users/mz2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mz2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mz2/subscriptions",
"organizations_url": "https://api.github.com/users/mz2/orgs",
"repos_url": "https://api.github.com/users/mz2/repos",
"events_url": "https://api.github.com/users/mz2/events{/privacy}",
"received_events_url": "https://api.github.com/users/mz2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-02-09T19:33:11
| 2024-11-21T08:14:40
| 2024-11-21T08:05:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2432",
"html_url": "https://github.com/ollama/ollama/pull/2432",
"diff_url": "https://github.com/ollama/ollama/pull/2432.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2432.patch",
"merged_at": null
}
|
Adds strictly confined snap packaging for x86-64 (~~and arm64~~ just x86-64 for starters; arm64 looks like it needs a bit of love overall in `ollama`), presently published on the channel `latest/beta`. This is a nice alternative to Docker: there is no need to install and configure the NVIDIA Docker runtime, the systemd service is set up automatically, updates arrive over the air, and resources and data on the user's host system are straightforward to access within the limits of the application's confinement. It is also safer than a bare installation onto the host system via the shell script, which some users might not want to run (strict confinement is roughly containerisation from the host system, analogous to Docker).
Installable with:
```bash
sudo snap install ollama --channel latest/beta
```
- strict confinement used with [`network`](https://snapcraft.io/docs/network-interface), [`network-bind`](https://snapcraft.io/docs/network-bind-interface), [`home`](https://snapcraft.io/docs/home-interface), [`removable-media`](https://snapcraft.io/docs/removable-media-interface), [`opengl`](https://snapcraft.io/docs/opengl-interface) interfaces in use, i.e. it can access and serve a port, access home directory and `/media`, and access the GPU (the `opengl` interface also grants access to CUDA etc).
- starts up a systemd service automatically with `ollama serve`.
- if removable media access is needed (e.g. user prefers storing models under a disk mounted under `/media`), `sudo snap connect ollama:removable-media` (for security reasons, removable media access not granted without user action).
If this looks interesting, I'm happy to hand over the package on snapcraft.io to an ollama maintainer, and can contribute CI integration to make it easy to keep the snap package up to date whenever you release.
If you want to build this locally, [after installing `snapcraft` and either the multipass or LXD provider for it](https://snapcraft.io/docs/snapcraft-setup), go to the root directory of the repository and run:
```bash
snapcraft
```
## Configuration
- **host** configurable in style `sudo snap set ollama host=0.0.0.0:12345` (changing the config value will automatically restart the systemd service)
- **models** directory configurable in style `sudo snap set ollama models=/your/preferred/path/to/your/models` (changing the config value will automatically restart the service)
- when calling `ollama` from the shell, automatically calls it with `OLLAMA_HOST` and `OLLAMA_MODELS` set based on above configuration (i.e. no need for setting these in `bashrc` etc).
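Taken together, the configuration surface described above can be exercised from a shell roughly like this (the host, port, and models path are illustrative, not defaults):

```shell
# Point the service at a different bind address; the snap restarts the
# systemd service automatically when the value changes.
sudo snap set ollama host=0.0.0.0:12345

# Store models on a removable disk: first grant removable-media access
# (not connected by default for security reasons), then set the path.
sudo snap connect ollama:removable-media
sudo snap set ollama models=/media/models/ollama

# Inspect the current configuration values.
sudo snap get ollama host
sudo snap get ollama models
```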
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2432/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7629
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7629/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7629/comments
|
https://api.github.com/repos/ollama/ollama/issues/7629/events
|
https://github.com/ollama/ollama/issues/7629
| 2,651,860,893
|
I_kwDOJ0Z1Ps6eEC-d
| 7,629
|
Ollama not Utilizing Maximum available VRAM
|
{
"login": "ahmedashraf443",
"id": 26746937,
"node_id": "MDQ6VXNlcjI2NzQ2OTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/26746937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmedashraf443",
"html_url": "https://github.com/ahmedashraf443",
"followers_url": "https://api.github.com/users/ahmedashraf443/followers",
"following_url": "https://api.github.com/users/ahmedashraf443/following{/other_user}",
"gists_url": "https://api.github.com/users/ahmedashraf443/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahmedashraf443/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahmedashraf443/subscriptions",
"organizations_url": "https://api.github.com/users/ahmedashraf443/orgs",
"repos_url": "https://api.github.com/users/ahmedashraf443/repos",
"events_url": "https://api.github.com/users/ahmedashraf443/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahmedashraf443/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 7
| 2024-11-12T11:12:49
| 2024-12-02T15:24:24
| 2024-12-02T15:24:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I've been using llama.cpp recently to run large models, some of which exceed my GPU's VRAM capacity. With llama.cpp, when I run models that are too large to fully fit in VRAM, it manages to utilize around 7.2 to 7.5 GB of my 8 GB VRAM, offloading the remainder to system RAM. This approach maximizes GPU usage and improves performance.
However, when running the same model with Ollama, it only uses about 6 GB of VRAM, leaving a significant portion of my GPU memory unused. This reduced VRAM utilization seems to impact token generation speed.
Is there a way to configure Ollama to use the maximum available VRAM? My goal is to increase VRAM usage to improve token generation per minute.
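As far as I know there is no single "use all VRAM" switch, but the CPU/GPU layer split can be nudged manually via the `num_gpu` option (the number of layers to offload to the GPU). A sketch, with an illustrative model name and layer count rather than tuned values:

```shell
# Request more offloaded layers for a single request via the REST API.
# num_gpu is the number of model layers to place on the GPU.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "hello",
  "options": { "num_gpu": 40 }
}'

# Or bake the setting into a derived model with a Modelfile:
cat > Modelfile <<'EOF'
FROM llama3
PARAMETER num_gpu 40
EOF
ollama create llama3-gpu40 -f Modelfile
```

Raising `num_gpu` past what actually fits can cause out-of-memory errors, so it may take some trial and error to find the highest value that still loads.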
I'm using a laptop with the following specs:
- RTX 2070 (8 GB VRAM)
- i7-9750H
- 32 GB DDR5 RAM
- Gen 4 SSD
Thank you!
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.34
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7629/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7629/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/15
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/15/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/15/comments
|
https://api.github.com/repos/ollama/ollama/issues/15/events
|
https://github.com/ollama/ollama/pull/15
| 1,779,805,255
|
PR_kwDOJ0Z1Ps5UMC1L
| 15
|
batch model
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-06-28T21:34:34
| 2023-06-29T00:10:43
| 2023-06-29T00:10:39
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/15",
"html_url": "https://github.com/ollama/ollama/pull/15",
"diff_url": "https://github.com/ollama/ollama/pull/15.diff",
"patch_url": "https://github.com/ollama/ollama/pull/15.patch",
"merged_at": "2023-06-29T00:10:39"
}
|
add a batch mode which is distinct in the way the prompts are displayed to the user. this produces cleaner output without a trailing `>>>`
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/15/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/15/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8054
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8054/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8054/comments
|
https://api.github.com/repos/ollama/ollama/issues/8054/events
|
https://github.com/ollama/ollama/pull/8054
| 2,734,043,292
|
PR_kwDOJ0Z1Ps6E6u8Z
| 8,054
|
ci: fix linux version
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-12-11T21:14:25
| 2024-12-11T22:10:00
| 2024-12-11T22:09:57
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8054",
"html_url": "https://github.com/ollama/ollama/pull/8054",
"diff_url": "https://github.com/ollama/ollama/pull/8054.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8054.patch",
"merged_at": "2024-12-11T22:09:57"
}
|
Pass through the version override so the makefiles use it.
The rc3 Linux binaries were reporting a version string of `0.5.2-rc3-0-g581a4a5-dirty`.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8054/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4427
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4427/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4427/comments
|
https://api.github.com/repos/ollama/ollama/issues/4427/events
|
https://github.com/ollama/ollama/issues/4427
| 2,294,983,591
|
I_kwDOJ0Z1Ps6Iyqun
| 4,427
|
ollama can't run qwen:72b, error msg ""gpu VRAM usage didn't recover within timeout
|
{
"login": "changingshow",
"id": 7709440,
"node_id": "MDQ6VXNlcjc3MDk0NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7709440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/changingshow",
"html_url": "https://github.com/changingshow",
"followers_url": "https://api.github.com/users/changingshow/followers",
"following_url": "https://api.github.com/users/changingshow/following{/other_user}",
"gists_url": "https://api.github.com/users/changingshow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/changingshow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/changingshow/subscriptions",
"organizations_url": "https://api.github.com/users/changingshow/orgs",
"repos_url": "https://api.github.com/users/changingshow/repos",
"events_url": "https://api.github.com/users/changingshow/events{/privacy}",
"received_events_url": "https://api.github.com/users/changingshow/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 20
| 2024-05-14T09:50:54
| 2024-11-20T10:36:33
| 2024-05-21T22:30:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have already downloaded qwen:7b, but when I run `ollama run qwen:7b`, I get this error: `Error: timed out waiting for llama runner to start:`. The server.log contains this message: `gpu VRAM usage didn't recover within timeout`.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.1.37
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4427/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4427/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/276
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/276/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/276/comments
|
https://api.github.com/repos/ollama/ollama/issues/276/events
|
https://github.com/ollama/ollama/pull/276
| 1,836,094,664
|
PR_kwDOJ0Z1Ps5XKdQu
| 276
|
configurable rope frequency parameters
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-08-04T05:34:23
| 2023-08-07T20:39:39
| 2023-08-07T20:39:38
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/276",
"html_url": "https://github.com/ollama/ollama/pull/276",
"diff_url": "https://github.com/ollama/ollama/pull/276.diff",
"patch_url": "https://github.com/ollama/ollama/pull/276.patch",
"merged_at": "2023-08-07T20:39:38"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/276/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4240
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4240/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4240/comments
|
https://api.github.com/repos/ollama/ollama/issues/4240/events
|
https://github.com/ollama/ollama/pull/4240
| 2,284,381,644
|
PR_kwDOJ0Z1Ps5u0SU0
| 4,240
|
reference license, template, system as files
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-07T23:21:22
| 2025-01-29T19:20:34
| 2025-01-29T19:20:34
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4240",
"html_url": "https://github.com/ollama/ollama/pull/4240",
"diff_url": "https://github.com/ollama/ollama/pull/4240.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4240.patch",
"merged_at": null
}
|
this change allows certain layers to take a file path as their value. the final value for the layer is the content of that file
```
FROM llama3
LICENSE ./meta-llama/Meta-Llama-3-8B-Instruct/LICENSE
LICENSE ./meta-llama/Meta-Llama-3-8B-Instruct/USE_POLICY.md
```
any value that does not reference a file is used as-is
```
FROM foo
TEMPLATE {{ .System }} {{ .Prompt }}
```
the file value does not support wildcards or globs
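as a combined usage sketch (the model name and file path here are hypothetical), file-backed and literal values can coexist in one Modelfile:
```
FROM llama3
LICENSE ./LICENSE
TEMPLATE {{ .System }} {{ .Prompt }}
```
here the LICENSE layer is read from the file while the TEMPLATE layer keeps its literal value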
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4240/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4240/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7115
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7115/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7115/comments
|
https://api.github.com/repos/ollama/ollama/issues/7115/events
|
https://github.com/ollama/ollama/pull/7115
| 2,570,168,454
|
PR_kwDOJ0Z1Ps59zZXy
| 7,115
|
Test
|
{
"login": "kavita-rane2",
"id": 175689274,
"node_id": "U_kgDOCnjOOg",
"avatar_url": "https://avatars.githubusercontent.com/u/175689274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kavita-rane2",
"html_url": "https://github.com/kavita-rane2",
"followers_url": "https://api.github.com/users/kavita-rane2/followers",
"following_url": "https://api.github.com/users/kavita-rane2/following{/other_user}",
"gists_url": "https://api.github.com/users/kavita-rane2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kavita-rane2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kavita-rane2/subscriptions",
"organizations_url": "https://api.github.com/users/kavita-rane2/orgs",
"repos_url": "https://api.github.com/users/kavita-rane2/repos",
"events_url": "https://api.github.com/users/kavita-rane2/events{/privacy}",
"received_events_url": "https://api.github.com/users/kavita-rane2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-10-07T11:32:38
| 2024-10-08T04:15:25
| 2024-10-07T11:32:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7115",
"html_url": "https://github.com/ollama/ollama/pull/7115",
"diff_url": "https://github.com/ollama/ollama/pull/7115.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7115.patch",
"merged_at": null
}
| null |
{
"login": "kavita-rane2",
"id": 175689274,
"node_id": "U_kgDOCnjOOg",
"avatar_url": "https://avatars.githubusercontent.com/u/175689274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kavita-rane2",
"html_url": "https://github.com/kavita-rane2",
"followers_url": "https://api.github.com/users/kavita-rane2/followers",
"following_url": "https://api.github.com/users/kavita-rane2/following{/other_user}",
"gists_url": "https://api.github.com/users/kavita-rane2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kavita-rane2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kavita-rane2/subscriptions",
"organizations_url": "https://api.github.com/users/kavita-rane2/orgs",
"repos_url": "https://api.github.com/users/kavita-rane2/repos",
"events_url": "https://api.github.com/users/kavita-rane2/events{/privacy}",
"received_events_url": "https://api.github.com/users/kavita-rane2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7115/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7267
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7267/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7267/comments
|
https://api.github.com/repos/ollama/ollama/issues/7267/events
|
https://github.com/ollama/ollama/issues/7267
| 2,598,879,242
|
I_kwDOJ0Z1Ps6a58AK
| 7,267
|
Running out of memory when allocating to second GPU
|
{
"login": "joshuakoh1",
"id": 40602863,
"node_id": "MDQ6VXNlcjQwNjAyODYz",
"avatar_url": "https://avatars.githubusercontent.com/u/40602863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshuakoh1",
"html_url": "https://github.com/joshuakoh1",
"followers_url": "https://api.github.com/users/joshuakoh1/followers",
"following_url": "https://api.github.com/users/joshuakoh1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshuakoh1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshuakoh1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshuakoh1/subscriptions",
"organizations_url": "https://api.github.com/users/joshuakoh1/orgs",
"repos_url": "https://api.github.com/users/joshuakoh1/repos",
"events_url": "https://api.github.com/users/joshuakoh1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshuakoh1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 5
| 2024-10-19T07:58:59
| 2024-10-22T09:32:55
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
There are no issues with any model that fits on a single 3090, but Ollama seems to run out of memory when trying to distribute a larger model across the second 3090.
```
INFO [wmain] starting c++ runner | tid="33768" timestamp=1729324300
INFO [wmain] build info | build=3670 commit="aad7f071" tid="33768" timestamp=1729324300
INFO [wmain] system info | n_threads=20 n_threads_batch=20 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="33768" timestamp=1729324300 total_threads=28
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="27" port="56651" tid="33768" timestamp=1729324300
llama_model_loader: loaded meta data with 41 key-value pairs and 724 tensors from C:\Users\Joshua\.ollama\models\blobs\sha256-001c9aacecbdca348f7c7c6d2b1a4120d447bf023afcacb3b864df023f1e2be4 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Llama 3.1 70B Instruct
llama_model_loader: - kv 3: general.organization str = Meta Llama
llama_model_loader: - kv 4: general.finetune str = Instruct
llama_model_loader: - kv 5: general.basename str = Llama-3.1
llama_model_loader: - kv 6: general.size_label str = 70B
llama_model_loader: - kv 7: general.license str = llama3.1
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Llama 3.1 70B Instruct
llama_model_loader: - kv 10: general.base_model.0.organization str = Meta Llama
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/meta-llama/Lla...
llama_model_loader: - kv 12: general.tags arr[str,3] = ["nvidia", "llama3.1", "text-generati...
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: general.datasets arr[str,1] = ["nvidia/HelpSteer2"]
llama_model_loader: - kv 15: llama.block_count u32 = 80
llama_model_loader: - kv 16: llama.context_length u32 = 131072
llama_model_loader: - kv 17: llama.embedding_length u32 = 8192
llama_model_loader: - kv 18: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 19: llama.attention.head_count u32 = 64
llama_model_loader: - kv 20: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 21: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 22: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 23: llama.attention.key_length u32 = 128
llama_model_loader: - kv 24: llama.attention.value_length u32 = 128
llama_model_loader: - kv 25: general.file_type u32 = 13
llama_model_loader: - kv 26: llama.vocab_size u32 = 128256
llama_model_loader: - kv 27: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 28: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 29: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 30: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 31: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 32: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 33: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 34: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 35: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 36: general.quantization_version u32 = 2
llama_model_loader: - kv 37: quantize.imatrix.file str = /models_out/Llama-3.1-Nemotron-70B-In...
llama_model_loader: - kv 38: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
llama_model_loader: - kv 39: quantize.imatrix.entries_count i32 = 560
llama_model_loader: - kv 40: quantize.imatrix.chunks_count i32 = 125
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q3_K: 321 tensors
llama_model_loader: - type q5_K: 240 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-10-19T15:51:40.427+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q3_K - Large
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 34.58 GiB (4.21 BPW)
llm_load_print_meta: general.name = Llama 3.1 70B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 1.02 MiB
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CUDA_Host buffer size = 430.55 MiB
llm_load_tensors: CUDA0 buffer size = 17507.01 MiB
llm_load_tensors: CUDA1 buffer size = 17474.99 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 1312.00 MiB on device 0: cudaMalloc failed: out of memory
llama_kv_cache_init: failed to allocate buffer for kv cache
llama_new_context_with_model: llama_kv_cache_init() failed for self-attention cache
llama_init_from_gpt_params: error: failed to create context with model 'C:\Users\Joshua\.ollama\models\blobs\sha256-001c9aacecbdca348f7c7c6d2b1a4120d447bf023afcacb3b864df023f1e2be4'
ERROR [load_model] unable to load model | model="C:\\Users\\Joshua\\.ollama\\models\\blobs\\sha256-001c9aacecbdca348f7c7c6d2b1a4120d447bf023afcacb3b864df023f1e2be4" tid="33768" timestamp=1729324312
time=2024-10-19T15:51:53.175+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
time=2024-10-19T15:51:55.231+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
time=2024-10-19T15:51:55.734+08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: error:failed to create context with model 'C:\\Users\\Joshua\\.ollama\\models\\blobs\\sha256-001c9aacecbdca348f7c7c6d2b1a4120d447bf023afcacb3b864df023f1e2be4'"
[GIN] 2024/10/19 - 15:51:55 | 500 | 15.6142405s | 127.0.0.1 | POST "/api/generate"
```
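The failure is in KV-cache allocation, not weight loading: at n_ctx=8192 with 80 layers and 1024-dim K/V per layer (values from the llm_load_print_meta output above), an f16 cache needs roughly 2560 MiB in total, about 1280 MiB per GPU, close to the 1312 MiB cudaMalloc that failed. A back-of-the-envelope estimate (my own sketch, not Ollama's internal accounting):

```python
def kv_cache_mib(n_ctx, n_layer, n_embd_k_gqa, n_embd_v_gqa, bytes_per_elem=2):
    """Rough f16 KV-cache size: ctx * layers * (K dim + V dim) * element size."""
    total_bytes = n_ctx * n_layer * (n_embd_k_gqa + n_embd_v_gqa) * bytes_per_elem
    return total_bytes / 2**20

# Values taken from the llm_load_print_meta output in the log above
total = kv_cache_mib(n_ctx=8192, n_layer=80, n_embd_k_gqa=1024, n_embd_v_gqa=1024)
print(int(total))  # total MiB across both GPUs
```

Lowering the context length (e.g. the `num_ctx` option) shrinks this linearly, which may let the cache fit alongside the ~17.5 GiB of weights already on each card.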
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.13
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7267/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8507
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8507/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8507/comments
|
https://api.github.com/repos/ollama/ollama/issues/8507/events
|
https://github.com/ollama/ollama/pull/8507
| 2,800,284,068
|
PR_kwDOJ0Z1Ps6IaRxH
| 8,507
|
Add Nvidia Model
|
{
"login": "Setland34",
"id": 105908636,
"node_id": "U_kgDOBlAJnA",
"avatar_url": "https://avatars.githubusercontent.com/u/105908636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Setland34",
"html_url": "https://github.com/Setland34",
"followers_url": "https://api.github.com/users/Setland34/followers",
"following_url": "https://api.github.com/users/Setland34/following{/other_user}",
"gists_url": "https://api.github.com/users/Setland34/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Setland34/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Setland34/subscriptions",
"organizations_url": "https://api.github.com/users/Setland34/orgs",
"repos_url": "https://api.github.com/users/Setland34/repos",
"events_url": "https://api.github.com/users/Setland34/events{/privacy}",
"received_events_url": "https://api.github.com/users/Setland34/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-20T21:18:44
| 2025-01-20T21:18:46
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8507",
"html_url": "https://github.com/ollama/ollama/pull/8507",
"diff_url": "https://github.com/ollama/ollama/pull/8507.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8507.patch",
"merged_at": null
}
|
Fixes #8460
Add support for the Nvidia model in the repository.
* **api/types.go**: Add a new struct `NvidiaModel` and a constant `NvidiaModelURL`.
* **convert/convert_llama.go**: Add a new function `convertNvidiaModel` to handle the Nvidia Model conversion. Add a new struct `NvidiaModel` and a constant `NvidiaModelURL`.
* **convert/convert.go**: Add a new case for `NvidiaModel` in the main conversion function.
* **convert/testdata/Nvidia-Model.json**: Add test data for the Nvidia Model.
* **server/model.go**: Add a new function `handleNvidiaModel` to process the Nvidia Model.
---
For more details, open the [Copilot Workspace session](https://copilot-workspace.githubnext.com/ollama/ollama/pull/8507?shareId=41209112-8377-4ae8-a2de-aa50536b27be).
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8507/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3272
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3272/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3272/comments
|
https://api.github.com/repos/ollama/ollama/issues/3272/events
|
https://github.com/ollama/ollama/issues/3272
| 2,197,785,908
|
I_kwDOJ0Z1Ps6C_400
| 3,272
|
Error: exception create_tensor: tensor 'output.weight' not found
|
{
"login": "GhadaJouini",
"id": 32711189,
"node_id": "MDQ6VXNlcjMyNzExMTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/32711189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GhadaJouini",
"html_url": "https://github.com/GhadaJouini",
"followers_url": "https://api.github.com/users/GhadaJouini/followers",
"following_url": "https://api.github.com/users/GhadaJouini/following{/other_user}",
"gists_url": "https://api.github.com/users/GhadaJouini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GhadaJouini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GhadaJouini/subscriptions",
"organizations_url": "https://api.github.com/users/GhadaJouini/orgs",
"repos_url": "https://api.github.com/users/GhadaJouini/repos",
"events_url": "https://api.github.com/users/GhadaJouini/events{/privacy}",
"received_events_url": "https://api.github.com/users/GhadaJouini/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-03-20T15:10:35
| 2024-10-29T07:31:51
| 2024-04-15T19:46:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm trying to add a custom model to Ollama but am encountering the error below. Is it possible that the model quantization is not correct?
**Error: exception create_tensor: tensor 'output.weight' not found**
Quantized model: https://huggingface.co/MaziyarPanahi/zephyr-7b-gemma-v0.1-GGUF
I'm on Ollama version **0.1.29**.
**OS distribution:**
PRETTY_NAME="Ubuntu 22.04.4 LTS" NAME="Ubuntu" VERSION_ID="22.04" VERSION="22.04.4 LTS (Jammy Jellyfish)" VERSION_CODENAME=jammy ID=ubuntu ID_LIKE=debian


### What did you expect to see?
The model should run correctly
### Steps to reproduce
Step 1: Install git lfs
Step 2: Use the [HuggingFaceModelDownloader](https://github.com/bodaay/HuggingFaceModelDownloader) to download the model from Hugging Face
Step 3: Create the Modelfile
Step 4: Create the model using the command: **ollama create "zephyr-7b-gemma-v0.1.Q6_0" -f Modelfile**
Step 5: Run the model using the command: **ollama run zephyr-7b-gemma-v0.1.Q6_0:latest**
### Are there any recent changes that introduced the issue?
Modelfile:
FROM ./zephyr-7b-gemma-v0.1.Q6_K.gguf
TEMPLATE "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
### OS
_No response_
### Architecture
_No response_
### Platform
_No response_
### Ollama version
_No response_
### GPU
No GPU
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3272/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3272/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6435
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6435/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6435/comments
|
https://api.github.com/repos/ollama/ollama/issues/6435/events
|
https://github.com/ollama/ollama/issues/6435
| 2,474,771,281
|
I_kwDOJ0Z1Ps6TggNR
| 6,435
|
0.3.6 /api/embed returns 500 if multiple items are provided in input
|
{
"login": "davidliudev",
"id": 31893484,
"node_id": "MDQ6VXNlcjMxODkzNDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/31893484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidliudev",
"html_url": "https://github.com/davidliudev",
"followers_url": "https://api.github.com/users/davidliudev/followers",
"following_url": "https://api.github.com/users/davidliudev/following{/other_user}",
"gists_url": "https://api.github.com/users/davidliudev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidliudev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidliudev/subscriptions",
"organizations_url": "https://api.github.com/users/davidliudev/orgs",
"repos_url": "https://api.github.com/users/davidliudev/repos",
"events_url": "https://api.github.com/users/davidliudev/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidliudev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-08-20T05:49:18
| 2024-08-22T21:51:44
| 2024-08-22T21:51:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This does not happen on 0.3.4; it only happens on the latest 0.3.6 (I haven't tested 0.3.5).
If I reduce the payload to a single item, everything works; the error only occurs with multiple items.
I have to downgrade to 0.3.4 until this is fixed.
Here is the log:
time=2024-08-20T13:38:29.761+08:00 level=ERROR source=routes.go:394 msg="embedding generation failed" error="health resp: Get \"http://127.0.0.1:56157/health\": read tcp 127.0.0.1:56190->127.0.0.1:56157: wsarecv: An existing connection was forcibly closed by the remote host."
Sample payload:
```
{
"model" : "nomic-embed-text",
"input" : [ "BREAKFAST", "MAPLE LEAVES", "SCENERY", "ROKKO MOUNTAIN", "SUNSET", "HOT SPRINGS", "PEAK", "NARITA AIRPORT", "SPEAKER", "MORNING", "MOUNTAIN", "RED-EYE FLIGHT", "STRING QUARTET", "AIR", "AUTUMN", "CHERRY BLOSSOM", "LAKE KAWAGUCHI", "SUBWAY", "ARAKURAYAMA SENGEN PARK", "MUSIC", "PARKS", "SCARY", "SYMPHONIES", "TEMPERATURE", "CHUREITO PAGODA", "HAKONE", "LAKE", "PINOCCHIO", "3 DEGREES", "THREE DEGREES CELSIUS", "NARRATOR", "SINGAPORE", "MUSIC BOX MUSEUM", "PIANO", "RING", "SAND ART", "FIVE-STORIED PAGODA", "HEAVENLY BELL", "KAWAGUCHIKO STATION", "MOUNT FUJI", "MOUNT FUJI AREA", "PIANOS", "SHINJUKU", "TENJOYAMA PARK", "BENCHES", "BUS", "HOTEL", "JAPAN", "TRAIL", "SWANS", "APRIL 3RD", "KANSAI", "LOCAL GIRL", "SHINJUJU", "NECKLACE", "PINOCCHIO'S STORY" ]
}
```
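Since the failure only shows up with multi-item payloads, a possible workaround until this is fixed is to split the `input` array into smaller batches and send one `/api/embed` request per batch. The batch size and the `requests.post` call below are assumptions for illustration, not part of the original report:

```python
def chunk(items, size):
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def embed_in_batches(inputs, model="nomic-embed-text", size=8):
    """Build one /api/embed payload per batch instead of a single large request."""
    payloads = []
    for batch in chunk(inputs, size):
        payload = {"model": model, "input": batch}
        payloads.append(payload)
        # To actually send each batch (assumes a local Ollama server on the
        # default port), uncomment the following line:
        # requests.post("http://localhost:11434/api/embed", json=payload)
    return payloads
```

Collecting the per-batch embeddings client-side trades a few extra HTTP round trips for avoiding the connection reset seen in the log above.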
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.6
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6435/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6752
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6752/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6752/comments
|
https://api.github.com/repos/ollama/ollama/issues/6752/events
|
https://github.com/ollama/ollama/pull/6752
| 2,519,544,747
|
PR_kwDOJ0Z1Ps57JteD
| 6,752
|
Update README.md
|
{
"login": "rapidarchitect",
"id": 126218667,
"node_id": "U_kgDOB4Xxqw",
"avatar_url": "https://avatars.githubusercontent.com/u/126218667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rapidarchitect",
"html_url": "https://github.com/rapidarchitect",
"followers_url": "https://api.github.com/users/rapidarchitect/followers",
"following_url": "https://api.github.com/users/rapidarchitect/following{/other_user}",
"gists_url": "https://api.github.com/users/rapidarchitect/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rapidarchitect/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rapidarchitect/subscriptions",
"organizations_url": "https://api.github.com/users/rapidarchitect/orgs",
"repos_url": "https://api.github.com/users/rapidarchitect/repos",
"events_url": "https://api.github.com/users/rapidarchitect/events{/privacy}",
"received_events_url": "https://api.github.com/users/rapidarchitect/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-11T11:56:49
| 2024-09-12T01:36:26
| 2024-09-12T01:36:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6752",
"html_url": "https://github.com/ollama/ollama/pull/6752",
"diff_url": "https://github.com/ollama/ollama/pull/6752.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6752.patch",
"merged_at": "2024-09-12T01:36:26"
}
|
Added Ollama Mixture of Experts repository to terminal apps.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6752/timeline
| null | null | true
|