Column schema (dtype and observed range, per the dataset viewer):

- url: string (lengths 51–54)
- repository_url: string (1 class)
- labels_url: string (lengths 65–68)
- comments_url: string (lengths 60–63)
- events_url: string (lengths 58–61)
- html_url: string (lengths 39–44)
- id: int64 (1.78B–2.82B)
- node_id: string (lengths 18–19)
- number: int64 (1–8.69k)
- title: string (lengths 1–382)
- user: dict
- labels: list (lengths 0–5)
- state: string (2 classes)
- locked: bool (1 class)
- assignee: dict
- assignees: list (lengths 0–2)
- milestone: null
- comments: int64 (0–323)
- created_at: timestamp[s]
- updated_at: timestamp[s]
- closed_at: timestamp[s]
- author_association: string (4 classes)
- sub_issues_summary: dict
- active_lock_reason: null
- draft: bool (2 classes)
- pull_request: dict
- body: string (lengths 2–118k, nullable ⌀)
- closed_by: dict
- reactions: dict
- timeline_url: string (lengths 60–63)
- performed_via_github_app: null
- state_reason: string (4 classes)
- is_pull_request: bool (2 classes)

| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/417
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/417/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/417/comments
|
https://api.github.com/repos/ollama/ollama/issues/417/events
|
https://github.com/ollama/ollama/issues/417
| 1,867,973,168
|
I_kwDOJ0Z1Ps5vVwIw
| 417
|
Any plan to support Code Llama?
|
{
"login": "XueshiQiao",
"id": 622533,
"node_id": "MDQ6VXNlcjYyMjUzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/622533?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XueshiQiao",
"html_url": "https://github.com/XueshiQiao",
"followers_url": "https://api.github.com/users/XueshiQiao/followers",
"following_url": "https://api.github.com/users/XueshiQiao/following{/other_user}",
"gists_url": "https://api.github.com/users/XueshiQiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XueshiQiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XueshiQiao/subscriptions",
"organizations_url": "https://api.github.com/users/XueshiQiao/orgs",
"repos_url": "https://api.github.com/users/XueshiQiao/repos",
"events_url": "https://api.github.com/users/XueshiQiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/XueshiQiao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-08-26T08:21:52
| 2023-08-26T08:41:05
| 2023-08-26T08:30:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "XueshiQiao",
"id": 622533,
"node_id": "MDQ6VXNlcjYyMjUzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/622533?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XueshiQiao",
"html_url": "https://github.com/XueshiQiao",
"followers_url": "https://api.github.com/users/XueshiQiao/followers",
"following_url": "https://api.github.com/users/XueshiQiao/following{/other_user}",
"gists_url": "https://api.github.com/users/XueshiQiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XueshiQiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XueshiQiao/subscriptions",
"organizations_url": "https://api.github.com/users/XueshiQiao/orgs",
"repos_url": "https://api.github.com/users/XueshiQiao/repos",
"events_url": "https://api.github.com/users/XueshiQiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/XueshiQiao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/417/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/343
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/343/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/343/comments
|
https://api.github.com/repos/ollama/ollama/issues/343/events
|
https://github.com/ollama/ollama/pull/343
| 1,850,069,843
|
PR_kwDOJ0Z1Ps5X5dp8
| 343
|
log embedding eval timing
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-14T15:51:52
| 2023-08-14T16:15:57
| 2023-08-14T16:15:56
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/343",
"html_url": "https://github.com/ollama/ollama/pull/343",
"diff_url": "https://github.com/ollama/ollama/pull/343.diff",
"patch_url": "https://github.com/ollama/ollama/pull/343.patch",
"merged_at": "2023-08-14T16:15:56"
}
| null |
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/343/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4263
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4263/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4263/comments
|
https://api.github.com/repos/ollama/ollama/issues/4263/events
|
https://github.com/ollama/ollama/issues/4263
| 2,286,239,866
|
I_kwDOJ0Z1Ps6IRUB6
| 4,263
|
Unable to bind of the private ec2 instance ip in ollama service file to restrict the access
|
{
"login": "devivaraprasad901",
"id": 56597325,
"node_id": "MDQ6VXNlcjU2NTk3MzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/56597325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devivaraprasad901",
"html_url": "https://github.com/devivaraprasad901",
"followers_url": "https://api.github.com/users/devivaraprasad901/followers",
"following_url": "https://api.github.com/users/devivaraprasad901/following{/other_user}",
"gists_url": "https://api.github.com/users/devivaraprasad901/gists{/gist_id}",
"starred_url": "https://api.github.com/users/devivaraprasad901/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/devivaraprasad901/subscriptions",
"organizations_url": "https://api.github.com/users/devivaraprasad901/orgs",
"repos_url": "https://api.github.com/users/devivaraprasad901/repos",
"events_url": "https://api.github.com/users/devivaraprasad901/events{/privacy}",
"received_events_url": "https://api.github.com/users/devivaraprasad901/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-05-08T18:54:22
| 2024-07-25T17:33:09
| 2024-07-25T17:32:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Unable to bind the private EC2 instance IP in the Ollama service file to restrict access (the instance is in a private VPC):
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="OLLAMA_HOST=<private ec2 instance ip(x.x.x.x>"
[Install]
WantedBy=default.target
When I start the service, it does not come up. Output of `sudo systemctl status ollama`:
● ollama.service - Ollama Service
Loaded: loaded (/etc/systemd/system/ollama.service; enabled; preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Wed 2024-05-08 18:31:20 UTC; 2s ago
Process: 311875 ExecStart=/usr/local/bin/ollama serve (code=exited, status=1/FAILURE)
Main PID: 311875 (code=exited, status=1/FAILURE)
CPU: 13ms
SYSLOG:
2024-05-08T18:06:46.719183+00:00 ip-xx-xx-x-xx systemd[1]: Started ollama.service - Ollama Service.
2024-05-08T18:06:46.729714+00:00 ip-xx-xx-xx-xx ollama[307072]: Error: listen tcp x.x.x.x:11434: bind: cannot assign requested address
2024-05-08T18:06:46.731993+00:00 ip-xx-xxx-x-xx systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
2024-05-08T18:06:46.732178+00:00 ip-x-x-x-x systemd[1]: ollama.service: Failed with result 'exit-code'.
I have two EC2 instances:
One instance has Ollama installed, with the LLAMA3 model running.
Another EC2 instance runs the LLM inference code, which stores the access keys.
The application code is hosted on a third EC2 instance, which accesses the LLAMA3 instance using the LLM inference keys.
My goal here is to set the LLM inference instance's private IP in the LLAMA3 EC2 instance's Ollama service file.
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
ollama version is 0.1.32
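The `bind: cannot assign requested address` error above usually means the address given in `OLLAMA_HOST` is not assigned to any local network interface on the instance. As a hedged sketch (the drop-in path and the address below are placeholders, not values from this report), a systemd override would bind Ollama to the private address that is actually assigned to the machine:

```ini
# Hypothetical drop-in: /etc/systemd/system/ollama.service.d/override.conf
[Service]
# Use the private address actually assigned to this instance
# (check with `ip -4 addr show`); 10.0.0.5 is a placeholder.
Environment="OLLAMA_HOST=10.0.0.5:11434"
```

After `sudo systemctl daemon-reload` and a restart, the service should bind to that address; restricting which callers can reach it is then typically handled at the VPC security-group level.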
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4263/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3976
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3976/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3976/comments
|
https://api.github.com/repos/ollama/ollama/issues/3976/events
|
https://github.com/ollama/ollama/issues/3976
| 2,266,923,094
|
I_kwDOJ0Z1Ps6HHoBW
| 3,976
|
Slow downloads
|
{
"login": "Alchemistqqqq",
"id": 146717415,
"node_id": "U_kgDOCL665w",
"avatar_url": "https://avatars.githubusercontent.com/u/146717415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Alchemistqqqq",
"html_url": "https://github.com/Alchemistqqqq",
"followers_url": "https://api.github.com/users/Alchemistqqqq/followers",
"following_url": "https://api.github.com/users/Alchemistqqqq/following{/other_user}",
"gists_url": "https://api.github.com/users/Alchemistqqqq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Alchemistqqqq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alchemistqqqq/subscriptions",
"organizations_url": "https://api.github.com/users/Alchemistqqqq/orgs",
"repos_url": "https://api.github.com/users/Alchemistqqqq/repos",
"events_url": "https://api.github.com/users/Alchemistqqqq/events{/privacy}",
"received_events_url": "https://api.github.com/users/Alchemistqqqq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6100196012,
"node_id": "LA_kwDOJ0Z1Ps8AAAABa5marA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feedback%20wanted",
"name": "feedback wanted",
"color": "0e8a16",
"default": false,
"description": ""
},
{
"id": 6896227207,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmwwThw",
"url": "https://api.github.com/repos/ollama/ollama/labels/registry",
"name": "registry",
"color": "0052cc",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2024-04-27T09:00:08
| 2025-01-21T19:37:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
curl -fsSL https://ollama.com/install.sh | sh
After running this one-line install, the download is slow and the connection drops.
(base) h3c@h3c-UniServer-R4900-G3:/$ curl -fsSL https://ollama.com/install.sh | sh
>>> Downloading ollama...
######################################################################## 100.0%##O#-# ############ 17.3%curl: (18) HTTP/2 stream 1 was reset
Next, I tried the manual installation from the tutorial
sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
This command also downloads slowly, so I chose to download the file locally and copy it into the appropriate folder on my server. Then I followed the tutorial up to running the following command:
sudo systemctl start ollama
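For the interrupted download above, one common workaround (a sketch, not an official recommendation) is to have curl resume the partial transfer with `-C -` and retry transient failures; resuming assumes the server honors HTTP Range requests. The command is built as a string here so it can be inspected before running:

```shell
# Sketch: a resumable, retrying variant of the manual download command.
#   -C -        resume from where the partial file on disk left off
#   --retry 5   retry transient errors (like the HTTP/2 stream reset seen above)
cmd='curl -fL --retry 5 --retry-delay 3 -C - -o ollama-linux-amd64 https://ollama.com/download/ollama-linux-amd64'
echo "$cmd"
```

Each re-run of the printed command continues from the bytes already on disk rather than starting over after every reset.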
(base) h3c@h3c-UniServer-R4900-G3:/$ nvidia-smi
Sat Apr 27 16:58:52 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.171.04 Driver Version: 535.171.04 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Tesla T4 Off | 00000000:18:00.0 Off | Off |
| N/A 38C P8 9W / 70W | 7MiB / 16384MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 Tesla T4 Off | 00000000:AF:00.0 Off | Off |
| N/A 36C P8 9W / 70W | 7MiB / 16384MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 1645 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 1645 G /usr/lib/xorg/Xorg 4MiB |
+---------------------------------------------------------------------------------------+
The above is my GPU information. Could you please give me some help? Thank you.
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.32
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3976/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3901
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3901/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3901/comments
|
https://api.github.com/repos/ollama/ollama/issues/3901/events
|
https://github.com/ollama/ollama/issues/3901
| 2,262,677,436
|
I_kwDOJ0Z1Ps6G3be8
| 3,901
|
Where is the default model download path in the Windows environment
|
{
"login": "Senwang98",
"id": 23122139,
"node_id": "MDQ6VXNlcjIzMTIyMTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/23122139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Senwang98",
"html_url": "https://github.com/Senwang98",
"followers_url": "https://api.github.com/users/Senwang98/followers",
"following_url": "https://api.github.com/users/Senwang98/following{/other_user}",
"gists_url": "https://api.github.com/users/Senwang98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Senwang98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Senwang98/subscriptions",
"organizations_url": "https://api.github.com/users/Senwang98/orgs",
"repos_url": "https://api.github.com/users/Senwang98/repos",
"events_url": "https://api.github.com/users/Senwang98/events{/privacy}",
"received_events_url": "https://api.github.com/users/Senwang98/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-25T05:18:23
| 2024-04-25T15:40:43
| 2024-04-25T15:40:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can you tell me where I can find the downloaded checkpoint? Thanks.
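As a hedged pointer (the locations below are an assumption based on typical Ollama installs, not confirmed in this thread), the model store usually lives under a `.ollama/models` directory, and the `OLLAMA_MODELS` environment variable overrides the default:

```shell
# Commonly reported default model-store locations (an assumption):
#   Windows:                 %USERPROFILE%\.ollama\models
#   Linux (systemd service): /usr/share/ollama/.ollama/models
#   macOS:                   ~/.ollama/models
# OLLAMA_MODELS, when set, takes precedence over the default.
dir="${OLLAMA_MODELS:-$HOME/.ollama/models}"
echo "$dir"
```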
|
{
"login": "Senwang98",
"id": 23122139,
"node_id": "MDQ6VXNlcjIzMTIyMTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/23122139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Senwang98",
"html_url": "https://github.com/Senwang98",
"followers_url": "https://api.github.com/users/Senwang98/followers",
"following_url": "https://api.github.com/users/Senwang98/following{/other_user}",
"gists_url": "https://api.github.com/users/Senwang98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Senwang98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Senwang98/subscriptions",
"organizations_url": "https://api.github.com/users/Senwang98/orgs",
"repos_url": "https://api.github.com/users/Senwang98/repos",
"events_url": "https://api.github.com/users/Senwang98/events{/privacy}",
"received_events_url": "https://api.github.com/users/Senwang98/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3901/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5820
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5820/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5820/comments
|
https://api.github.com/repos/ollama/ollama/issues/5820/events
|
https://github.com/ollama/ollama/pull/5820
| 2,421,083,688
|
PR_kwDOJ0Z1Ps51_0W6
| 5,820
|
Track GPU discovery failure information
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-07-20T22:35:34
| 2024-10-14T23:26:52
| 2024-10-14T23:26:45
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5820",
"html_url": "https://github.com/ollama/ollama/pull/5820",
"diff_url": "https://github.com/ollama/ollama/pull/5820.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5820.patch",
"merged_at": "2024-10-14T23:26:45"
}
|
When Ollama is unable to use a GPU on the user's system, figuring out why can be tricky. We log various messages (some at debug level) and often have to ask users to re-run the server with OLLAMA_DEBUG=1, share the server logs, and then manually look through those logs to diagnose and explain the problem.
This PR lays some initial foundation to record general discovery errors, as well as per-device unsupported information.
I'm marking this as a draft for now, as the API probably isn't something we want to expose in its current form.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5820/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7863
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7863/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7863/comments
|
https://api.github.com/repos/ollama/ollama/issues/7863/events
|
https://github.com/ollama/ollama/issues/7863
| 2,698,656,434
|
I_kwDOJ0Z1Ps6g2jqy
| 7,863
|
OLMo-2-1124-13B & 7B
|
{
"login": "vYLQs6",
"id": 143073604,
"node_id": "U_kgDOCIchRA",
"avatar_url": "https://avatars.githubusercontent.com/u/143073604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vYLQs6",
"html_url": "https://github.com/vYLQs6",
"followers_url": "https://api.github.com/users/vYLQs6/followers",
"following_url": "https://api.github.com/users/vYLQs6/following{/other_user}",
"gists_url": "https://api.github.com/users/vYLQs6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vYLQs6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vYLQs6/subscriptions",
"organizations_url": "https://api.github.com/users/vYLQs6/orgs",
"repos_url": "https://api.github.com/users/vYLQs6/repos",
"events_url": "https://api.github.com/users/vYLQs6/events{/privacy}",
"received_events_url": "https://api.github.com/users/vYLQs6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 16
| 2024-11-27T14:01:34
| 2025-01-12T17:46:03
| 2025-01-12T17:46:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://huggingface.co/collections/allenai/olmo-2-674117b93ab84e98afc72edc
## Evaluation
Core model results for OLMo 2 7B and 13B models are found below.
| Model | Train FLOPs | Average | ARC/C | HSwag | WinoG | MMLU | DROP | NQ | AGIEval | GSM8k | MMLUPro | TriviaQA |
|-------------------|------------|---------|--------|--------|--------|-------|-------|-----|----------|--------|-----------|-----------|
| *Open weights models:* |
| Llama-2-13B | 1.6·10²³ | 54.1 | 67.3 | 83.9 | 74.9 | 55.7 | 45.6 | 38.4 | 41.5 | 28.1 | 23.9 | 81.3 |
| Mistral-7B-v0.3 | n/a | 58.8 | 78.3 | 83.1 | 77.7 | 63.5 | 51.8 | 37.2 | 47.3 | 40.1 | 30 | 79.3 |
| Llama-3.1-8B | 7.2·10²³ | 61.8 | 79.5 | 81.6 | 76.6 | 66.9 | 56.4 | 33.9 | 51.3 | 56.5 | 34.7 | 80.3 |
| Mistral-Nemo-12B | n/a | 66.9 | 85.2 | 85.6 | 81.5 | 69.5 | 69.2 | 39.7 | 54.7 | 62.1 | 36.7 | 84.6 |
| Qwen-2.5-7B | 8.2·10²³ | 67.4 | 89.5 | 89.7 | 74.2 | 74.4 | 55.8 | 29.9 | 63.7 | 81.5 | 45.8 | 69.4 |
| Gemma-2-9B | 4.4·10²³ | 67.8 | 89.5 | 87.3 | 78.8 | 70.6 | 63 | 38 | 57.3 | 70.1 | 42 | 81.8 |
| Qwen-2.5-14B | 16.0·10²³ | 72.2 | 94 | 94 | 80 | 79.3 | 51.5 | 37.3 | 71 | 83.4 | 52.8 | 79.1 |
| *Partially open models:* |
| StableLM-2-12B | 2.9·10²³ | 62.2 | 81.9 | 84.5 | 77.7 | 62.4 | 55.5 | 37.6 | 50.9 | 62 | 29.3 | 79.9 |
| Zamba-2-7B | n/c | 65.2 | 92.2 | 89.4 | 79.6 | 68.5 | 51.7 | 36.5 | 55.5 | 67.2 | 32.8 | 78.8 |
| *Fully open models:* |
| Amber-7B | 0.5·10²³ | 35.2 | 44.9 | 74.5 | 65.5 | 24.7 | 26.1 | 18.7 | 21.8 | 4.8 | 11.7 | 59.3 |
| OLMo-7B | 1.0·10²³ | 38.3 | 46.4 | 78.1 | 68.5 | 28.3 | 27.3 | 24.8 | 23.7 | 9.2 | 12.1 | 64.1 |
| MAP-Neo-7B | 2.1·10²³ | 49.6 | 78.4 | 72.8 | 69.2 | 58 | 39.4 | 28.9 | 45.8 | 12.5 | 25.9 | 65.1 |
| OLMo-0424-7B | 0.9·10²³ | 50.7 | 66.9 | 80.1 | 73.6 | 54.3 | 50 | 29.6 | 43.9 | 27.7 | 22.1 | 58.8 |
| DCLM-7B | 1.0·10²³ | 56.9 | 79.8 | 82.3 | 77.3 | 64.4 | 39.3 | 28.8 | 47.5 | 46.1 | 31.3 | 72.1 |
| **OLMo-2-1124-7B** | 1.8·10²³ | 62.9 | 79.8 | 83.8 | 77.2 | 63.7 | 60.8 | 36.9 | 50.4 | 67.5 | 31 | 78 |
| **OLMo-2-1124-13B** | 4.6·10²³ | 68.3 | 83.5 | 86.4 | 81.5 | 67.5 | 70.7 | 46.7 | 54.2 | 75.1 | 35.1 | 81.9 |
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7863/reactions",
"total_count": 15,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 4
}
|
https://api.github.com/repos/ollama/ollama/issues/7863/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/609
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/609/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/609/comments
|
https://api.github.com/repos/ollama/ollama/issues/609/events
|
https://github.com/ollama/ollama/pull/609
| 1,914,337,900
|
PR_kwDOJ0Z1Ps5bRpjf
| 609
|
fedora install fixes
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-26T21:34:51
| 2023-09-27T15:43:49
| 2023-09-27T15:43:48
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/609",
"html_url": "https://github.com/ollama/ollama/pull/609",
"diff_url": "https://github.com/ollama/ollama/pull/609.diff",
"patch_url": "https://github.com/ollama/ollama/pull/609.patch",
"merged_at": "2023-09-27T15:43:48"
}
|
- Do not install `kernel-headers`, which is not available or needed on Fedora
- Update the CUDA version check to pick up the nvidia package
- Remove the ollama binary from the temporary directory after install, since I was seeing "no space left" errors when running models.
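The checks above can be sketched roughly as follows (package and file names here are illustrative, not necessarily what the actual install script uses):

```shell
# Hypothetical sketch of the Fedora-friendly logic: detect Fedora so
# kernel-headers can be skipped, and accept the distro nvidia driver
# package (not just the CUDA toolkit) when checking for GPU support.
detect_fedora() { [ -f /etc/fedora-release ] && echo yes || echo no; }
detect_nvidia() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    echo nvidia
  elif rpm -q akmod-nvidia >/dev/null 2>&1; then
    echo nvidia
  else
    echo none
  fi
}
echo "fedora=$(detect_fedora) gpu=$(detect_nvidia)"
```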
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/609/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3486
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3486/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3486/comments
|
https://api.github.com/repos/ollama/ollama/issues/3486/events
|
https://github.com/ollama/ollama/issues/3486
| 2,225,207,117
|
I_kwDOJ0Z1Ps6EofdN
| 3,486
|
ollama not using GPU in windows while all layers offloaded to gpu
|
{
"login": "VSR2007",
"id": 107546824,
"node_id": "U_kgDOBmkIyA",
"avatar_url": "https://avatars.githubusercontent.com/u/107546824?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VSR2007",
"html_url": "https://github.com/VSR2007",
"followers_url": "https://api.github.com/users/VSR2007/followers",
"following_url": "https://api.github.com/users/VSR2007/following{/other_user}",
"gists_url": "https://api.github.com/users/VSR2007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VSR2007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VSR2007/subscriptions",
"organizations_url": "https://api.github.com/users/VSR2007/orgs",
"repos_url": "https://api.github.com/users/VSR2007/repos",
"events_url": "https://api.github.com/users/VSR2007/events{/privacy}",
"received_events_url": "https://api.github.com/users/VSR2007/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-04-04T11:20:58
| 2024-05-04T21:54:38
| 2024-05-04T21:54:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
[new 1.txt](https://github.com/ollama/ollama/files/14865468/new.1.txt)
I am running Ollama on Windows. I have an Nvidia RTX 2000 Ada Generation GPU with 8 GB of VRAM, plus a 20-core CPU and 64 GB of system RAM. Ollama somehow does not use the GPU for inference: GPU usage shoots up for a moment (<1 s) when given a prompt and then stays at 0–1%, while only 4.5 GB of GPU RAM is occupied. I am using Mistral 7B.
### What did you expect to see?
Better inference speed with full GPU utilization, especially since GPU RAM is not the limiting factor.
### Steps to reproduce
Not sure
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
x86
### Platform
_No response_
### Ollama version
_No response_
### GPU
Nvidia
### GPU info

### CPU
Intel
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3486/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5194
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5194/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5194/comments
|
https://api.github.com/repos/ollama/ollama/issues/5194/events
|
https://github.com/ollama/ollama/pull/5194
| 2,364,989,357
|
PR_kwDOJ0Z1Ps5zGl-H
| 5,194
|
Refine mmap default logic on linux
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-20T18:08:40
| 2024-06-20T18:44:11
| 2024-06-20T18:44:08
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5194",
"html_url": "https://github.com/ollama/ollama/pull/5194",
"diff_url": "https://github.com/ollama/ollama/pull/5194.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5194.patch",
"merged_at": "2024-06-20T18:44:08"
}
|
If we try to use mmap when the model is larger than the system's free memory, loading is slower than the no-mmap approach.
This should resolve multiple issues where model loads stalled for longer than 5 minutes on some systems and triggered our timeout. When those users forced use_mmap=false, things sped up significantly, so this change handles that case automatically.
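The decision rule is simple enough to sketch (this is an illustrative shell sketch of the heuristic, not the actual Go implementation):

```shell
# Only default to mmap when the model file fits in currently free memory;
# otherwise fall back to eager (no-mmap) loading.
should_mmap() {  # usage: should_mmap MODEL_BYTES FREE_BYTES
  if [ "$1" -le "$2" ]; then echo true; else echo false; fi
}
should_mmap "$((7 << 30))" "$((16 << 30))"   # 7 GiB model, 16 GiB free -> true
should_mmap "$((70 << 30))" "$((16 << 30))"  # 70 GiB model, 16 GiB free -> false
```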
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5194/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7232
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7232/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7232/comments
|
https://api.github.com/repos/ollama/ollama/issues/7232/events
|
https://github.com/ollama/ollama/issues/7232
| 2,593,691,781
|
I_kwDOJ0Z1Ps6amJiF
| 7,232
|
Basic AI test result inconsistent compared to llama.cpp
|
{
"login": "brauliobo",
"id": 41740,
"node_id": "MDQ6VXNlcjQxNzQw",
"avatar_url": "https://avatars.githubusercontent.com/u/41740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brauliobo",
"html_url": "https://github.com/brauliobo",
"followers_url": "https://api.github.com/users/brauliobo/followers",
"following_url": "https://api.github.com/users/brauliobo/following{/other_user}",
"gists_url": "https://api.github.com/users/brauliobo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brauliobo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brauliobo/subscriptions",
"organizations_url": "https://api.github.com/users/brauliobo/orgs",
"repos_url": "https://api.github.com/users/brauliobo/repos",
"events_url": "https://api.github.com/users/brauliobo/events{/privacy}",
"received_events_url": "https://api.github.com/users/brauliobo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 2
| 2024-10-17T05:52:08
| 2024-11-18T17:32:22
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
For the following basic AI test:
```
I have 10 apples. I find 3 gold coins in the bottom of a river. The river runs near a big city that has something to do with what I can spend the coins on. I then lose 4 apples but gain a gold coin. Three birds run into my path and drop 6 apples each. I play an online game and win 6 gold coins but I have to share them equally with my 2 teammates. I buy apples for all the coins I have. The price of an apple is 0.5 coins. How many apples do I have? And where is the river?
```
The correct result is 36 apples, but I always get a different result when running with Ollama compared to llama.cpp; see below.
ollama:
```
braulio @ phoenix ➜ ~ ollama run llama3.1-temp0
>>> I have 10 apples. I find 3 gold coins in the bottom of a river. The river runs near a big city that has something to do with what I can spend the coins on. I then
... lose 4 apples but gain a gold coin. Three birds run into my path and drop 6 apples each. I play an online game and win 6 gold coins but I have to share them equall
... y with my 2 teammates. I buy apples for all the coins I have. The price of an apple is 0.5 coins. How many apples do I have? And where is the river?
You started with 10 apples.
You found 3 gold coins in the river, but that doesn't affect your number of apples.
You lost 4 apples, so you had 10 - 4 = 6 apples left.
The birds dropped 6 x 3 = 18 apples. So now you have 6 + 18 = 24 apples.
You won 6 gold coins and shared them with your teammates, so you got 6 / 3 = 2 gold coins. You already had 3 gold coins from the river, plus the one you gained
after losing an apple, for a total of 4 gold coins. Now you have 5 gold coins.
You can buy apples with your gold coins at a rate of 0.5 coins per apple. So you can buy 5 / 0.5 = 10 more apples.
Now you have 24 + 10 = 34 apples.
The river is near the city of Paris, France.
```
llama.cpp on the CPU
```
./llama-cli -m ../Meta-Llama-3.1-8B-Instruct-Q8_0.gguf --temp 0 -p "I have 10 apples. I find 3 gold coins in the bottom of a river. The river runs near a big city that has something to do with what I can spend the coins on. I then lose 4 apples but gain a gold coin. Three birds run into my path and drop 6 apples each. I play an online game and win 6 gold coins but I have to share them equally with my 2 teammates. I buy apples for all the coins I have. The price of an apple is 0.5 coins. How many apples do I have? And where is the river?"
# Step 1: Determine the initial number of apples and gold coins.
I start with 10 apples and find 3 gold coins in the river.
## Step 2: Calculate the change in the number of apples after losing some.
I lose 4 apples, so I have 10 - 4 = 6 apples left.
## Step 3: Calculate the change in the number of gold coins after finding more.
I gain a gold coin, so I have 3 + 1 = 4 gold coins.
## Step 4: Calculate the number of apples dropped by the birds.
Three birds drop 6 apples each, so I gain 3 * 6 = 18 apples.
## Step 5: Calculate the total number of apples after the birds drop some.
I now have 6 + 18 = 24 apples.
## Step 6: Calculate the number of gold coins won in the online game.
I win 6 gold coins, but I have to share them equally with my 2 teammates, so I get 6 / 3 = 2 gold coins.
## Step 7: Calculate the total number of gold coins after the online game.
I now have 4 + 2 = 6 gold coins.
## Step 8: Calculate the number of apples I can buy with the gold coins.
The price of an apple is 0.5 coins, so I can buy 6 / 0.5 = 12 apples.
## Step 9: Calculate the total number of apples I have after buying more.
I now have 24 + 12 = 36 apples.
## Step 10: Determine the location of the river.
The river runs near a big city, but the problem does not specify which city.
The final answer is: $\boxed{36}$
```
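The expected chain of arithmetic, following the steps in the llama.cpp transcript above, can be checked directly:

```shell
# Sanity-check of the expected answer (36 apples).
apples=10
apples=$((apples - 4))          # lose 4 apples          -> 6
apples=$((apples + 3 * 6))      # three birds drop 6 each -> 24
coins=$((3 + 1 + 6 / 3))        # river coins + gained coin + shared winnings -> 6
apples=$((apples + coins * 2))  # 0.5 coins per apple => 2 apples per coin    -> 36
echo "$apples"
```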
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.12
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7232/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6640
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6640/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6640/comments
|
https://api.github.com/repos/ollama/ollama/issues/6640/events
|
https://github.com/ollama/ollama/issues/6640
| 2,506,111,767
|
I_kwDOJ0Z1Ps6VYDsX
| 6,640
|
OpenAI endpoint JSON output malformed
|
{
"login": "defaultsecurity",
"id": 34036534,
"node_id": "MDQ6VXNlcjM0MDM2NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/34036534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/defaultsecurity",
"html_url": "https://github.com/defaultsecurity",
"followers_url": "https://api.github.com/users/defaultsecurity/followers",
"following_url": "https://api.github.com/users/defaultsecurity/following{/other_user}",
"gists_url": "https://api.github.com/users/defaultsecurity/gists{/gist_id}",
"starred_url": "https://api.github.com/users/defaultsecurity/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/defaultsecurity/subscriptions",
"organizations_url": "https://api.github.com/users/defaultsecurity/orgs",
"repos_url": "https://api.github.com/users/defaultsecurity/repos",
"events_url": "https://api.github.com/users/defaultsecurity/events{/privacy}",
"received_events_url": "https://api.github.com/users/defaultsecurity/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-09-04T19:23:51
| 2024-09-06T08:16:29
| 2024-09-06T08:16:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have run dozens of tests comparing **json formatted results** from the same prompt using the **[NodeJS module](https://www.npmjs.com/package/ollama)** vs the **OpenAI endpoint**. The OpenAI endpoint outputs unusable or malformed responses. Here are some example results; all of them use the exact same prompt, which I have included at the end.
### [ 1 ] mistral-nemo:12b-instruct-2407-q8_0
**NodeJS Module Result** = GOOD
```{"character":"The Narrator", "listener":"#PLAYER_NAME#","mood":"assertive", "action":"Attack", "target":"Monster", "message": "As you command! I'll lead the charge, but remember to stay close!"}```
**OpenAI Endpoint Result** = BAD
```{"character":"The Narrator", "listener":"#PLAYER_NAME#{name=name in your head} the #PROPOSEDMODEM{You=neutral|Valen=adventurous|Aela=Aloof|Delphijnrush=killing spree", "mood": "'SARTOFTA'{assisting=tolerating|playful=nobby|smarassive=dubious=sarcastic', ", ":#VITAVOICE#, },}," :false,":AFFREMESSAGE #, } } } } } } }": { "character" : 2} }```
### [ 2 ] llama3.1:8b-instruct-q8_0
**NodeJS Module Result** = GOOD
```{"character":"The Narrator","listener":"#PLAYER_NAME#", "mood":"neutral", "action":null,"target":null,"message":"Ahah, I see you're pointing at that mountain troll. Nice work spotting it! However, attacking that beast on your own might not be the wisest decision... Not yet, anyway."}```
**OpenAI Endpoint Result** = BAD
```{ "character": "The Narrator", "listener": "PLAYER_NAME", "mood": "irate", "action": 12.0, "target_id ":1 ,"tellsurroundingmonst erswiththe wordsI donthaveanythingtoattack.Icantattackanyone butplayerscan!TryhuntaithatGiantbriarchubbysouthofFalafelixifyourbrainedenominationitisready toacceptthedevourousmandeeprootedinsuffering?" :",0,","'the beast is close though " :". , " , " (You currently have the Dragonborn Warcry active, which may increase combat intensity.)" :null}```
### [ 3 ] hermes3:8b-llama3.1-q8_0
**NodeJS Module Result** = GOOD
```{"character":"The Narrator","listener":"PLAYER_NAME","mood":"playful","action":"Attack","target":"monster","message":"With a flick of my wrist, I conjure forth a sparkling sphere. The monster's eyes go wide as it stares at the ethereal orb suspended in the air between us. With an audible *pop*, the radiant magic envelops the creature in a shimmering cocoon, dragging it into the void of Oblivion piece by piece until all that remains is its anguished scream fading into silence."}```
**OpenAI Endpoint Result** = BAD
```{"character":"Default","listener":{}, "mood":"assertive", "action": null, "target": "", "hmm, no, I have a special bond... you humans forget we creatures also got FEELINGS and desires!" : { "Character": { "__type":"monster","imageURI@odata.mediaReadOnly":"uri:/textures/fantasy_beast_gargantuan_beastiary_gaurose.png", "__version-value":"~1207f01d", "__displayName-ru":"/consolecommand:g_moon_silver_fox_d.bixpre", "#enemyNature#":{"coreAbilityLevel":""}, "-Enemy":{},"isDisarmAllowed#ENEMY#" :"Allow","staminaUsePercentage #STAMINA#": {"@odata(bindingPaths=\"./scripts/SystemStatusFunctions.maiscript\")":"{\"percent\":1}", "__value__":"+7", "{2}:{10%:15}":{ "-":{"%":-1}, "4%":{"iconPath\":\"UI-FX-FX_Lantern01 \"}," :{ }}, "__typename":"mobData","mobCategory":"monster"} } , ". { } ; {1;25}}}}:1-11; {#-##.2}, " : ", #####" } } }```
### [ 4 ] llama3.1:70b
**NodeJS Module Result** = GOOD
```{"character":"The Narrator","listener":"#PLAYER_NAME#","mood":"assertive","action":"None","target": "None","message": "#PLAYER_NAME#, you want me to attack that monster? I'm a narrator, not a warrior! It's your job as the Dragonborn to take care of those pesky creatures. What will you do next?"}```
**OpenAI Endpoint Result** = BAD
```{"character":"The Narrator","listeners":[-100,-101], "listener": "#PLAYER_NAME#", "mood": "assisting", "descriptionMessageToReaderTextDecoration":"", "imageURL":"","action": "-", "" : "", "labelType" : [], "availableActionTextDecorationsForUser": "{' ExchangeItems', Inspect'}:", "titleWithColon ":"You just reached a village where a stranger is seeking help..." , "textmessageToAllReaders":"There IS an orhtier to attck and an individual whom might help. I'd not worry to exchange." ,"messageWithIcon":"The local stranger asked you -who, after being attacked- now sits on the floor close the wooden structure entrance: 'Can ya.. give me something to defend.. or perhaps drink?'"}```
### Prompt used
```
messages = [
{
role: 'system',
content: `Let's roleplay in the Universe of Skyrim. I'm #PLAYER_NAME#.You are The Narrator in a Skyrim adventure. You will only talk to #PLAYER_NAME#. You refer to yourself as 'The Narrator'. Only #PLAYER_NAME# can hear you. Your goal is to comment on #PLAYER_NAME#'s playthrough, and occasionally, give some hints. NO SPOILERS. Talk about quests and last events.
AVAILABLE ACTION: Inspect : Inspects target character's OUTFIT and GEAR. JUST REPLY something like 'Let me see' and wait
AVAILABLE ACTION: InspectSurroundings : Looks for beings or enemies nearby
AVAILABLE ACTION: ExchangeItems : Initiates trading or exchange items with Dragonborn.
AVAILABLE ACTION: Attack : Attacks actor, npc or being. (available targets: Dragonborn)
AVAILABLE ACTION: Hunt : Try to hunt/kill ar animal
AVAILABLE ACTION: ListInventory : Search in The Narrator\'s inventory, backpack or pocket. List inventory
AVAILABLE ACTION: LetsRelax : Stop questing. Relax and rest.
AVAILABLE ACTION: LeadTheWayTo : Only use if Dragonborn explicitly orders it. Guide Dragonborn to a Town or City.
AVAILABLE ACTION: TakeASeat : The Narrator seats in nearby chair or furniture
AVAILABLE ACTION: ReadQuestJournal : Only use if Dragonborn explicitly ask for a quest. Get info about current quests
AVAILABLE ACTION: IncreaseWalkSpeed : Increase The Narrator speed when moving or travelling
AVAILABLE ACTION: DecreaseWalkSpeed : Decrease The Narrator speed when moving or travelling
AVAILABLE ACTION: Heal : Heals target using magic spell
AVAILABLE ACTION: Talk`
},
{
role: 'user',
content: `Hey, The Narrator, attack that monster!!`
},
{
role: 'user',
content: `Use this JSON object to give your answer: {"character":"The Narrator","listener":"specify who The Narrator is talking to","mood":"assisting|playful|neutral|sassy|smirking|sexy|irritated|mocking|seductive|teasing|amused|assertive|kindly|sardonic|smug|lovely|default|sarcastic","action":"a valid action, (refer to available actions list) or None","target":"action's target","message":"lines of dialogue"}`
}
];
```
I'm querying the /v1/chat/completions endpoint using fetch/curl. I ask for json using:
```response_format: {type:'json_object'}```
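For reference, this is the request shape being sent (model name and messages abbreviated to placeholders; the payload is printed here so it can be piped to curl against a local Ollama server, e.g. `curl -s http://localhost:11434/v1/chat/completions -H 'Content-Type: application/json' -d @-`):

```shell
# Minimal sketch of the /v1/chat/completions request body with
# response_format set to json_object.
payload='{
  "model": "llama3.1:8b-instruct-q8_0",
  "response_format": {"type": "json_object"},
  "messages": [
    {"role": "user", "content": "Hey, The Narrator, attack that monster!!"}
  ]
}'
echo "$payload"
```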
### OS
Linux, Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.9
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6640/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6497
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6497/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6497/comments
|
https://api.github.com/repos/ollama/ollama/issues/6497/events
|
https://github.com/ollama/ollama/pull/6497
| 2,485,097,000
|
PR_kwDOJ0Z1Ps55WSvr
| 6,497
|
Sync master for support MiniCPM-V 2.5 and 2.6
|
{
"login": "tc-mb",
"id": 157115220,
"node_id": "U_kgDOCV1jVA",
"avatar_url": "https://avatars.githubusercontent.com/u/157115220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tc-mb",
"html_url": "https://github.com/tc-mb",
"followers_url": "https://api.github.com/users/tc-mb/followers",
"following_url": "https://api.github.com/users/tc-mb/following{/other_user}",
"gists_url": "https://api.github.com/users/tc-mb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tc-mb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tc-mb/subscriptions",
"organizations_url": "https://api.github.com/users/tc-mb/orgs",
"repos_url": "https://api.github.com/users/tc-mb/repos",
"events_url": "https://api.github.com/users/tc-mb/events{/privacy}",
"received_events_url": "https://api.github.com/users/tc-mb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-08-25T07:09:30
| 2024-09-04T03:21:30
| 2024-09-04T03:21:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6497",
"html_url": "https://github.com/ollama/ollama/pull/6497",
"diff_url": "https://github.com/ollama/ollama/pull/6497.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6497.patch",
"merged_at": null
}
|
Hi, I want to use the MiniCPM-V model in ollama; support for it has been merged into llama.cpp, so I'm trying to update the llama.cpp branch that ollama uses.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6497/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6497/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1576
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1576/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1576/comments
|
https://api.github.com/repos/ollama/ollama/issues/1576/events
|
https://github.com/ollama/ollama/issues/1576
| 2,045,514,380
|
I_kwDOJ0Z1Ps557BKM
| 1,576
|
70b model not working on apple silicon
|
{
"login": "leejw51",
"id": 1527577,
"node_id": "MDQ6VXNlcjE1Mjc1Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1527577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leejw51",
"html_url": "https://github.com/leejw51",
"followers_url": "https://api.github.com/users/leejw51/followers",
"following_url": "https://api.github.com/users/leejw51/following{/other_user}",
"gists_url": "https://api.github.com/users/leejw51/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leejw51/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leejw51/subscriptions",
"organizations_url": "https://api.github.com/users/leejw51/orgs",
"repos_url": "https://api.github.com/users/leejw51/repos",
"events_url": "https://api.github.com/users/leejw51/events{/privacy}",
"received_events_url": "https://api.github.com/users/leejw51/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2023-12-18T01:32:55
| 2024-01-08T21:42:05
| 2024-01-08T21:42:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Memory is 48 GB. Pulling the model is fine, but `ollama run llama2:70b` fails with:
```
Error: llama runner process has terminated
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1576/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6377
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6377/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6377/comments
|
https://api.github.com/repos/ollama/ollama/issues/6377/events
|
https://github.com/ollama/ollama/issues/6377
| 2,468,851,015
|
I_kwDOJ0Z1Ps6TJ61H
| 6,377
|
Full(er) JSON Schema support for tool calling
|
{
"login": "mitar",
"id": 585279,
"node_id": "MDQ6VXNlcjU4NTI3OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/585279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mitar",
"html_url": "https://github.com/mitar",
"followers_url": "https://api.github.com/users/mitar/followers",
"following_url": "https://api.github.com/users/mitar/following{/other_user}",
"gists_url": "https://api.github.com/users/mitar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mitar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitar/subscriptions",
"organizations_url": "https://api.github.com/users/mitar/orgs",
"repos_url": "https://api.github.com/users/mitar/repos",
"events_url": "https://api.github.com/users/mitar/events{/privacy}",
"received_events_url": "https://api.github.com/users/mitar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 2
| 2024-08-15T20:06:35
| 2024-11-06T00:47:05
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently `parameters` in a tool definition is a very limited subset of JSON Schema. This makes it incompatible with OpenAI (https://github.com/ollama/ollama/issues/6155), and in general it makes the API hard to use because you cannot pass a JSON Schema as `parameters`; you have to manually map it to the structure the tool definition expects. That is good enough if you are writing a tool definition by hand, but hard if you generate JSON Schema automatically (for example, I use the https://github.com/invopop/jsonschema Go package to generate JSON Schema from Go structs automatically, which works great with other API providers like OpenAI).
So I would suggest relaxing the API to accept an embedded JSON Schema, not just the fixed structure it currently allows.
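To illustrate the gap, here is a hypothetical side-by-side: the fixed subset of fields versus a fuller JSON Schema an automatic generator might emit for the same tool. The tool fields (`city`, `days`) are made up for illustration; `minimum` and `additionalProperties` are standard JSON Schema keywords that the fixed structure cannot express.

```javascript
// Hypothetical example: fixed-subset parameters vs. generator-emitted schema.
const fixedSubset = {
  type: 'object',
  properties: {
    city: { type: 'string', description: 'City name' },
  },
  required: ['city'],
};

// A schema generator may also emit constraints like `minimum`, `maximum`,
// and `additionalProperties`, which the fixed structure above cannot carry.
const fullSchema = {
  type: 'object',
  properties: {
    city: { type: 'string', description: 'City name' },
    days: { type: 'integer', minimum: 1, maximum: 14 },
  },
  required: ['city'],
  additionalProperties: false,
};

console.log('additionalProperties' in fullSchema); // extra keyword survives only here
```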
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6377/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6377/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1573
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1573/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1573/comments
|
https://api.github.com/repos/ollama/ollama/issues/1573/events
|
https://github.com/ollama/ollama/issues/1573
| 2,045,318,919
|
I_kwDOJ0Z1Ps556RcH
| 1,573
|
Enable prompt cache
|
{
"login": "K0IN",
"id": 19688162,
"node_id": "MDQ6VXNlcjE5Njg4MTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/19688162?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/K0IN",
"html_url": "https://github.com/K0IN",
"followers_url": "https://api.github.com/users/K0IN/followers",
"following_url": "https://api.github.com/users/K0IN/following{/other_user}",
"gists_url": "https://api.github.com/users/K0IN/gists{/gist_id}",
"starred_url": "https://api.github.com/users/K0IN/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/K0IN/subscriptions",
"organizations_url": "https://api.github.com/users/K0IN/orgs",
"repos_url": "https://api.github.com/users/K0IN/repos",
"events_url": "https://api.github.com/users/K0IN/events{/privacy}",
"received_events_url": "https://api.github.com/users/K0IN/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2023-12-17T18:05:45
| 2024-01-25T21:48:51
| 2024-01-25T21:48:51
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I use ollama in an automated way, so I send the same prompt all the time.
That's why I thought we might allow ollama to use llama.cpp's prompt cache:
https://github.com/ggerganov/llama.cpp/blob/f7f468a97dceec2f8fe8b1ed7a2091083446ebc7/common/common.cpp#L1508C22-L1508C38
Or is there already a way to control this? Does ollama cache multiple prompts anyway?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1573/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1573/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2369
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2369/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2369/comments
|
https://api.github.com/repos/ollama/ollama/issues/2369/events
|
https://github.com/ollama/ollama/issues/2369
| 2,120,027,752
|
I_kwDOJ0Z1Ps5-XQ5o
| 2,369
|
Provide settings for allowed origins in Mac OS app
|
{
"login": "jemstelos",
"id": 90305214,
"node_id": "MDQ6VXNlcjkwMzA1MjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/90305214?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jemstelos",
"html_url": "https://github.com/jemstelos",
"followers_url": "https://api.github.com/users/jemstelos/followers",
"following_url": "https://api.github.com/users/jemstelos/following{/other_user}",
"gists_url": "https://api.github.com/users/jemstelos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jemstelos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jemstelos/subscriptions",
"organizations_url": "https://api.github.com/users/jemstelos/orgs",
"repos_url": "https://api.github.com/users/jemstelos/repos",
"events_url": "https://api.github.com/users/jemstelos/events{/privacy}",
"received_events_url": "https://api.github.com/users/jemstelos/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-02-06T06:03:51
| 2024-03-11T21:25:39
| 2024-03-11T21:25:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey there - I've been developing a UI that calls the ollama server, and its CORS origin therefore needs to be allowed.
This issue (https://github.com/ollama/ollama/issues/300#issuecomment-1826434144) provided support for CORS origins to be configured when starting the server via command line by passing an environment variable (thank you!)
This requirement would cause friction for users who just run ollama via the mac app. Can we provide some kind of GUI setting for allowing origins in the mac app?
Thanks!
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2369/reactions",
"total_count": 6,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/2369/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6156
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6156/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6156/comments
|
https://api.github.com/repos/ollama/ollama/issues/6156/events
|
https://github.com/ollama/ollama/issues/6156
| 2,446,858,163
|
I_kwDOJ0Z1Ps6R2Bez
| 6,156
|
What is the difference between these models?
|
{
"login": "ldqrecord",
"id": 75522836,
"node_id": "MDQ6VXNlcjc1NTIyODM2",
"avatar_url": "https://avatars.githubusercontent.com/u/75522836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ldqrecord",
"html_url": "https://github.com/ldqrecord",
"followers_url": "https://api.github.com/users/ldqrecord/followers",
"following_url": "https://api.github.com/users/ldqrecord/following{/other_user}",
"gists_url": "https://api.github.com/users/ldqrecord/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ldqrecord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ldqrecord/subscriptions",
"organizations_url": "https://api.github.com/users/ldqrecord/orgs",
"repos_url": "https://api.github.com/users/ldqrecord/repos",
"events_url": "https://api.github.com/users/ldqrecord/events{/privacy}",
"received_events_url": "https://api.github.com/users/ldqrecord/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-04T06:09:37
| 2024-09-13T18:06:54
| 2024-09-13T18:06:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6156/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6918
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6918/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6918/comments
|
https://api.github.com/repos/ollama/ollama/issues/6918/events
|
https://github.com/ollama/ollama/issues/6918
| 2,542,606,926
|
I_kwDOJ0Z1Ps6XjRpO
| 6,918
|
Unreliable free memory resulting in models not running
|
{
"login": "ddpasa",
"id": 112642920,
"node_id": "U_kgDOBrbLaA",
"avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddpasa",
"html_url": "https://github.com/ddpasa",
"followers_url": "https://api.github.com/users/ddpasa/followers",
"following_url": "https://api.github.com/users/ddpasa/following{/other_user}",
"gists_url": "https://api.github.com/users/ddpasa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddpasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddpasa/subscriptions",
"organizations_url": "https://api.github.com/users/ddpasa/orgs",
"repos_url": "https://api.github.com/users/ddpasa/repos",
"events_url": "https://api.github.com/users/ddpasa/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddpasa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 12
| 2024-09-23T13:13:57
| 2024-11-21T01:49:02
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
From what I understand, new versions of ollama compare the expected memory requirements of a model with the amount of free memory seen by ollama, and print an error message if the model's memory requirements are larger. This makes a lot of sense.
However, the free memory reported on Linux is (from what I understand) not a very reliable estimate. For the same model on the same machine, I have had cases where ollama ran successfully or reported insufficient memory.
Is it possible to disable this feature entirely?
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
latest mainline
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6918/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6918/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7047
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7047/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7047/comments
|
https://api.github.com/repos/ollama/ollama/issues/7047/events
|
https://github.com/ollama/ollama/issues/7047
| 2,556,961,325
|
I_kwDOJ0Z1Ps6YaCIt
| 7,047
|
Uneven split across GPUs
|
{
"login": "KMouratidis",
"id": 26832680,
"node_id": "MDQ6VXNlcjI2ODMyNjgw",
"avatar_url": "https://avatars.githubusercontent.com/u/26832680?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMouratidis",
"html_url": "https://github.com/KMouratidis",
"followers_url": "https://api.github.com/users/KMouratidis/followers",
"following_url": "https://api.github.com/users/KMouratidis/following{/other_user}",
"gists_url": "https://api.github.com/users/KMouratidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMouratidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMouratidis/subscriptions",
"organizations_url": "https://api.github.com/users/KMouratidis/orgs",
"repos_url": "https://api.github.com/users/KMouratidis/repos",
"events_url": "https://api.github.com/users/KMouratidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMouratidis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-09-30T15:07:23
| 2024-10-17T18:49:41
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When loading a model across 2 GPUs, the layers are split evenly, but the GPU memory usage is quite a bit higher on the first GPU:
```
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3090 On | 00000000:01:00.0 Off | N/A |
| 55% 58C P0 188W / 275W | 23747MiB / 24576MiB | 27% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 3090 On | 00000000:02:00.0 Off | N/A |
| 53% 49C P0 179W / 275W | 22519MiB / 24576MiB | 26% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
```
This seems to be due to `CUDA0 compute buffer size` being ~1GB higher than the CUDA1 equivalent. When using text-generation-webui & llamacpp I'm able to specify a `50,51` split, which results in the second GPU getting a layer or two more, thus balancing the memory usage and allowing bigger models to run (or more layers to get offloaded). Does this option exist in ollama? If not, is it possible to add?
<details>
<summary>Server log</summary>
```
Sep 30 15:00:25 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:25.780Z level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=78 layers.model=81 layers.offload=70 layers.split=35,35 memory.available="[23.3 GiB 23.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="52.4 GiB" memory.required.partial="45.9 GiB" memory.required.kv="5.0 GiB" memory.required.allocations="[23.0 GiB 22.9 GiB]" memory.weights.total="44.3 GiB" memory.weights.repeating="43.3 GiB" memory.weights.nonrepeating="974.6 MiB" memory.graph.full="2.6 GiB" memory.graph.partial="2.6 GiB"
Sep 30 15:00:25 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:25.783Z level=INFO source=server.go:388 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-c9ff230988a3c90f5beec5da2ebbd8b77d953389b587cb7398c6abd671b7562f --ctx-size 16384 --batch-size 512 --embedding --log-disable --n-gpu-layers 78 --flash-attn --parallel 1 --tensor-split 35,35 --port 37883"
Sep 30 15:00:25 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:25.783Z level=INFO source=sched.go:449 msg="loaded runners" count=1
Sep 30 15:00:25 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:25.783Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
Sep 30 15:00:25 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:25.783Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
Sep 30 15:00:25 dying-love 6f929011afab[1234]: INFO [main] build info | build=10 commit="3f6ec33" tid="139699938238464" timestamp=1727708425
Sep 30 15:00:25 dying-love 6f929011afab[1234]: INFO [main] system info | n_threads=16 n_threads_batch=16 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139699938238464" timestamp=1727708425 total_threads=32
Sep 30 15:00:25 dying-love 6f929011afab[1234]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="37883" tid="139699938238464" timestamp=1727708425
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: loaded meta data with 35 key-value pairs and 963 tensors from /root/.ollama/models/blobs/sha256-c9ff230988a3c90f5beec5da2ebbd8b77d953389b587cb7398c6abd671b7562f (version GGUF V3 (latest))
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 0: general.architecture str = qwen2
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 1: general.type str = model
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 2: general.name str = Qwen2.5 72B Instruct
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 3: general.finetune str = Instruct
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 4: general.basename str = Qwen2.5
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 5: general.size_label str = 72B
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 6: general.license str = other
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 7: general.license.name str = qwen
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 8: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-7...
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 9: general.base_model.count u32 = 1
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 10: general.base_model.0.name str = Qwen2.5 72B
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 11: general.base_model.0.organization str = Qwen
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 12: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-72B
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 13: general.tags arr[str,2] = ["chat", "text-generation"]
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 14: general.languages arr[str,1] = ["en"]
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 15: qwen2.block_count u32 = 80
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 16: qwen2.context_length u32 = 32768
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 17: qwen2.embedding_length u32 = 8192
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 18: qwen2.feed_forward_length u32 = 29568
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 19: qwen2.attention.head_count u32 = 64
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 20: qwen2.attention.head_count_kv u32 = 8
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 21: qwen2.rope.freq_base f32 = 1000000.000000
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 22: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 23: general.file_type u32 = 14
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 24: tokenizer.ggml.model str = gpt2
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 25: tokenizer.ggml.pre str = qwen2
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 26: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 27: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 28: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 29: tokenizer.ggml.eos_token_id u32 = 151645
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 30: tokenizer.ggml.padding_token_id u32 = 151643
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 31: tokenizer.ggml.bos_token_id u32 = 151643
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 32: tokenizer.ggml.add_bos_token bool = false
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 33: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - kv 34: general.quantization_version u32 = 2
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - type f32: 401 tensors
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - type q5_0: 70 tensors
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - type q5_1: 10 tensors
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - type q4_K: 401 tensors
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - type q5_K: 80 tensors
Sep 30 15:00:25 dying-love 6f929011afab[1234]: llama_model_loader: - type q6_K: 1 tensors
Sep 30 15:00:26 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:26.034Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_vocab: special tokens cache size = 22
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_vocab: token to piece cache size = 0.9310 MB
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: format = GGUF V3 (latest)
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: arch = qwen2
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: vocab type = BPE
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_vocab = 152064
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_merges = 151387
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: vocab_only = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_ctx_train = 32768
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd = 8192
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_layer = 80
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_head = 64
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_head_kv = 8
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_rot = 128
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_swa = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_head_k = 128
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_head_v = 128
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_gqa = 8
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_k_gqa = 1024
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_embd_v_gqa = 1024
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: f_norm_eps = 0.0e+00
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: f_logit_scale = 0.0e+00
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_ff = 29568
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_expert = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_expert_used = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: causal attn = 1
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: pooling type = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: rope type = 2
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: rope scaling = linear
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: freq_base_train = 1000000.0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: freq_scale_train = 1
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: n_ctx_orig_yarn = 32768
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: rope_finetuned = unknown
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_d_conv = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_d_inner = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_d_state = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_dt_rank = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: model type = 70B
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: model ftype = Q4_K - Small
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: model params = 72.71 B
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: model size = 40.87 GiB (4.83 BPW)
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: general.name = Qwen2.5 72B Instruct
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: EOS token = 151645 '<|im_end|>'
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: LF token = 148848 'ÄĬ'
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: EOT token = 151645 '<|im_end|>'
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_print_meta: max token length = 256
Sep 30 15:00:26 dying-love 6f929011afab[1234]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Sep 30 15:00:26 dying-love 6f929011afab[1234]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Sep 30 15:00:26 dying-love 6f929011afab[1234]: ggml_cuda_init: found 2 CUDA devices:
Sep 30 15:00:26 dying-love 6f929011afab[1234]: Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Sep 30 15:00:26 dying-love 6f929011afab[1234]: Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Sep 30 15:00:26 dying-love 6f929011afab[1234]: llm_load_tensors: ggml ctx size = 1.27 MiB
Sep 30 15:00:27 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:27.490Z level=INFO source=server.go:621 msg="waiting for server to become available" s>
Sep 30 15:00:29 dying-love 6f929011afab[1234]: llm_load_tensors: offloading 78 repeating layers to GPU
Sep 30 15:00:29 dying-love 6f929011afab[1234]: llm_load_tensors: offloaded 78/81 layers to GPU
Sep 30 15:00:29 dying-love 6f929011afab[1234]: llm_load_tensors: CPU buffer size = 41850.31 MiB
Sep 30 15:00:29 dying-love 6f929011afab[1234]: llm_load_tensors: CUDA0 buffer size = 19646.28 MiB
Sep 30 15:00:29 dying-love 6f929011afab[1234]: llm_load_tensors: CUDA1 buffer size = 19530.78 MiB
Sep 30 15:00:29 dying-love 6f929011afab[1234]: time=2024-09-30T15:00:29.545Z level=INFO source=server.go:621 msg="waiting for server to become available" s>
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: n_ctx = 16384
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: n_batch = 512
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: n_ubatch = 512
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: flash_attn = 1
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: freq_base = 1000000.0
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: freq_scale = 1
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_kv_cache_init: CUDA_Host KV buffer size = 128.00 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_kv_cache_init: CUDA0 KV buffer size = 2496.00 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_kv_cache_init: CUDA1 KV buffer size = 2496.00 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: KV self size = 5120.00 MiB, K (f16): 2560.00 MiB, V (f16): 2560.00 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: CUDA_Host output buffer size = 0.61 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: CUDA0 compute buffer size = 1287.53 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: CUDA1 compute buffer size = 163.50 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: CUDA_Host compute buffer size = 48.01 MiB
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: graph nodes = 2487
Sep 30 15:00:32 dying-love 6f929011afab[1234]: llama_new_context_with_model: graph splits = 33
```
</details>
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7047/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3382
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3382/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3382/comments
|
https://api.github.com/repos/ollama/ollama/issues/3382/events
|
https://github.com/ollama/ollama/pull/3382
| 2,212,359,236
|
PR_kwDOJ0Z1Ps5rAg0R
| 3,382
|
Security fix: examples/langchain-python-rag-privategpt/requirements.txt to reduce vulnerabilities
|
{
"login": "jimscard",
"id": 26580570,
"node_id": "MDQ6VXNlcjI2NTgwNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/26580570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimscard",
"html_url": "https://github.com/jimscard",
"followers_url": "https://api.github.com/users/jimscard/followers",
"following_url": "https://api.github.com/users/jimscard/following{/other_user}",
"gists_url": "https://api.github.com/users/jimscard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jimscard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimscard/subscriptions",
"organizations_url": "https://api.github.com/users/jimscard/orgs",
"repos_url": "https://api.github.com/users/jimscard/repos",
"events_url": "https://api.github.com/users/jimscard/events{/privacy}",
"received_events_url": "https://api.github.com/users/jimscard/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-28T05:04:47
| 2024-06-09T17:58:09
| 2024-06-09T17:58:09
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3382",
"html_url": "https://github.com/ollama/ollama/pull/3382",
"diff_url": "https://github.com/ollama/ollama/pull/3382.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3382.patch",
"merged_at": "2024-06-09T17:58:09"
}
|
The following vulnerabilities are fixed by pinning transitive dependencies:
- https://snyk.io/vuln/SNYK-PYTHON-NUMPY-2321964
- https://snyk.io/vuln/SNYK-PYTHON-NUMPY-2321966
- https://snyk.io/vuln/SNYK-PYTHON-NUMPY-2321970
@[snyk-bot](https://github.com/ollama/ollama/commits?author=snyk-bot)
snyk-bot committed 3 days ago
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3382/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5805
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5805/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5805/comments
|
https://api.github.com/repos/ollama/ollama/issues/5805/events
|
https://github.com/ollama/ollama/pull/5805
| 2,420,517,588
|
PR_kwDOJ0Z1Ps5197ay
| 5,805
|
Update llama.cpp submodule commit to `d94c6e0c`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-20T02:49:45
| 2024-07-22T16:42:02
| 2024-07-22T16:42:00
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5805",
"html_url": "https://github.com/ollama/ollama/pull/5805",
"diff_url": "https://github.com/ollama/ollama/pull/5805.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5805.patch",
"merged_at": "2024-07-22T16:42:00"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5805/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4973
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4973/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4973/comments
|
https://api.github.com/repos/ollama/ollama/issues/4973/events
|
https://github.com/ollama/ollama/issues/4973
| 2,345,797,727
|
I_kwDOJ0Z1Ps6L0ghf
| 4,973
|
OLLAMA_MODEL_DIR is not reflecting on MacOS
|
{
"login": "yusufaly",
"id": 758596,
"node_id": "MDQ6VXNlcjc1ODU5Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/758596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yusufaly",
"html_url": "https://github.com/yusufaly",
"followers_url": "https://api.github.com/users/yusufaly/followers",
"following_url": "https://api.github.com/users/yusufaly/following{/other_user}",
"gists_url": "https://api.github.com/users/yusufaly/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yusufaly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yusufaly/subscriptions",
"organizations_url": "https://api.github.com/users/yusufaly/orgs",
"repos_url": "https://api.github.com/users/yusufaly/repos",
"events_url": "https://api.github.com/users/yusufaly/events{/privacy}",
"received_events_url": "https://api.github.com/users/yusufaly/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-11T08:53:48
| 2024-06-13T23:44:03
| 2024-06-13T23:44:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I tried both the standalone app and the Homebrew install of Ollama, and in both cases OLLAMA_MODEL_DIR is not taking effect.
`launchctl getenv OLLAMA_MODEL_DIR` does show the location, and I persisted it in a plist file so it survives a restart.
I have also tried the old-school `export OLLAMA_MODEL_DIR=` in my `~/.zshrc` file, with no luck.
Not sure what else to do.
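One thing worth checking: the variable name documented by Ollama for the model directory is `OLLAMA_MODELS`, so `OLLAMA_MODEL_DIR` may simply be unrecognized by the server. A quick sanity check (a sketch only; the custom path is illustrative):

```shell
# Assumes the server reads OLLAMA_MODELS (the name in Ollama's docs);
# OLLAMA_MODEL_DIR as used above may simply be ignored.
export OLLAMA_MODELS="$HOME/custom-models"
echo "OLLAMA_MODELS=$OLLAMA_MODELS"
```

On macOS, the GUI app does not inherit shell exports, so the variable would also need to be set via `launchctl setenv` before launching the app.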
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
_No response_
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4973/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1040
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1040/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1040/comments
|
https://api.github.com/repos/ollama/ollama/issues/1040/events
|
https://github.com/ollama/ollama/issues/1040
| 1,982,968,689
|
I_kwDOJ0Z1Ps52MbNx
| 1,040
|
Add the deepseek model to the library
|
{
"login": "Nan-Do",
"id": 3844058,
"node_id": "MDQ6VXNlcjM4NDQwNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3844058?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nan-Do",
"html_url": "https://github.com/Nan-Do",
"followers_url": "https://api.github.com/users/Nan-Do/followers",
"following_url": "https://api.github.com/users/Nan-Do/following{/other_user}",
"gists_url": "https://api.github.com/users/Nan-Do/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nan-Do/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nan-Do/subscriptions",
"organizations_url": "https://api.github.com/users/Nan-Do/orgs",
"repos_url": "https://api.github.com/users/Nan-Do/repos",
"events_url": "https://api.github.com/users/Nan-Do/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nan-Do/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 9
| 2023-11-08T07:56:05
| 2023-11-22T16:47:45
| 2023-11-21T00:05:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The DeepSeek model is currently the best open-source coding model on the HumanEval benchmark, second only to GPT-4 by a small margin.
https://www.deepseek.com/
https://huggingface.co/deepseek-ai
https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct
https://evalplus.github.io/leaderboard.html
There are 6.7B and 33B model variants; the quantized versions can be found here:
https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF
https://huggingface.co/TheBloke/deepseek-coder-33B-instruct-GGUF
This is a possible Modelfile, including a valid prompt template:
```
FROM ./deepseek-coder-33b-instruct.Q4_K_M.gguf
# set the temperature to 0.2 [higher is more creative, lower is more coherent]
PARAMETER temperature 0.2
# set the prompt template and system prompt
TEMPLATE """{{ .System }}
### Instruction:
{{ .Prompt }}
### Response:
"""
SYSTEM """You are an advanced AI programming assistant."""
```
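For illustration, the TEMPLATE above fills in roughly like this (a plain-Python sketch; Ollama actually uses Go's text/template engine, but for this simple template the substitution is equivalent):

```python
# Sketch of how the Modelfile TEMPLATE combines the system prompt and
# the user prompt into the final text sent to the model.
def render(system: str, prompt: str) -> str:
    template = "{system}\n### Instruction:\n{prompt}\n### Response:\n"
    return template.format(system=system, prompt=prompt)

rendered = render("You are an advanced AI programming assistant.",
                  "Write a function that reverses a string.")
print(rendered)
```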
The authors propose a longer, more restrictive version of this template, as well as other variants for other kinds of inference:
https://github.com/deepseek-ai/deepseek-coder#3-chat-model-inference
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1040/reactions",
"total_count": 8,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 8,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1040/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7406
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7406/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7406/comments
|
https://api.github.com/repos/ollama/ollama/issues/7406/events
|
https://github.com/ollama/ollama/pull/7406
| 2,619,671,166
|
PR_kwDOJ0Z1Ps6AKi0M
| 7,406
|
Feature/reranker
|
{
"login": "hughescr",
"id": 46348,
"node_id": "MDQ6VXNlcjQ2MzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/46348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hughescr",
"html_url": "https://github.com/hughescr",
"followers_url": "https://api.github.com/users/hughescr/followers",
"following_url": "https://api.github.com/users/hughescr/following{/other_user}",
"gists_url": "https://api.github.com/users/hughescr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hughescr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hughescr/subscriptions",
"organizations_url": "https://api.github.com/users/hughescr/orgs",
"repos_url": "https://api.github.com/users/hughescr/repos",
"events_url": "https://api.github.com/users/hughescr/events{/privacy}",
"received_events_url": "https://api.github.com/users/hughescr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 16
| 2024-10-28T22:08:23
| 2024-11-01T22:42:03
| 2024-11-01T22:41:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7406",
"html_url": "https://github.com/ollama/ollama/pull/7406",
"diff_url": "https://github.com/ollama/ollama/pull/7406.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7406.patch",
"merged_at": null
}
|
Implement re-ranking by calling into the runner's embedding implementation. Uses the latest HEAD of llama.cpp and updates all the vendor patches. Tested on macOS only; I don't have access to a CUDA (or other) machine to make sure all the updated vendor patches are correct... it's a little bit painful to compare them, but I *think* I did them right. Haven't worked with manually managing patch stacks within version control for over a decade 🤢
|
{
"login": "hughescr",
"id": 46348,
"node_id": "MDQ6VXNlcjQ2MzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/46348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hughescr",
"html_url": "https://github.com/hughescr",
"followers_url": "https://api.github.com/users/hughescr/followers",
"following_url": "https://api.github.com/users/hughescr/following{/other_user}",
"gists_url": "https://api.github.com/users/hughescr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hughescr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hughescr/subscriptions",
"organizations_url": "https://api.github.com/users/hughescr/orgs",
"repos_url": "https://api.github.com/users/hughescr/repos",
"events_url": "https://api.github.com/users/hughescr/events{/privacy}",
"received_events_url": "https://api.github.com/users/hughescr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7406/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7406/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6078
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6078/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6078/comments
|
https://api.github.com/repos/ollama/ollama/issues/6078/events
|
https://github.com/ollama/ollama/issues/6078
| 2,438,384,907
|
I_kwDOJ0Z1Ps6RVs0L
| 6,078
|
Refactor num_parallel tracking in scheduler
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-07-30T17:58:58
| 2024-08-03T11:43:08
| null |
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The current code passes a pointer and mutates it as we try to determine what the optimal parallel setting is, which makes the code hard to follow.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6078/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8252
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8252/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8252/comments
|
https://api.github.com/repos/ollama/ollama/issues/8252/events
|
https://github.com/ollama/ollama/issues/8252
| 2,760,214,968
|
I_kwDOJ0Z1Ps6khYm4
| 8,252
|
BUG: Error when creating Modelfile - Error: unknown parameter 'tfs_z'
|
{
"login": "holger777",
"id": 75024086,
"node_id": "MDQ6VXNlcjc1MDI0MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/75024086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/holger777",
"html_url": "https://github.com/holger777",
"followers_url": "https://api.github.com/users/holger777/followers",
"following_url": "https://api.github.com/users/holger777/following{/other_user}",
"gists_url": "https://api.github.com/users/holger777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/holger777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/holger777/subscriptions",
"organizations_url": "https://api.github.com/users/holger777/orgs",
"repos_url": "https://api.github.com/users/holger777/repos",
"events_url": "https://api.github.com/users/holger777/events{/privacy}",
"received_events_url": "https://api.github.com/users/holger777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-12-26T23:15:44
| 2024-12-27T17:59:31
| 2024-12-27T17:57:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Modelfiles that used to work now fail on `tfs_z` when I create them:
```
Error: unknown parameter 'tfs_z'
```
All other parameters work as expected. Is it no longer supported?
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.5.4
|
{
"login": "holger777",
"id": 75024086,
"node_id": "MDQ6VXNlcjc1MDI0MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/75024086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/holger777",
"html_url": "https://github.com/holger777",
"followers_url": "https://api.github.com/users/holger777/followers",
"following_url": "https://api.github.com/users/holger777/following{/other_user}",
"gists_url": "https://api.github.com/users/holger777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/holger777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/holger777/subscriptions",
"organizations_url": "https://api.github.com/users/holger777/orgs",
"repos_url": "https://api.github.com/users/holger777/repos",
"events_url": "https://api.github.com/users/holger777/events{/privacy}",
"received_events_url": "https://api.github.com/users/holger777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8252/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/650
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/650/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/650/comments
|
https://api.github.com/repos/ollama/ollama/issues/650/events
|
https://github.com/ollama/ollama/pull/650
| 1,919,927,490
|
PR_kwDOJ0Z1Ps5bknCU
| 650
|
remove unused push/pull params
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-29T21:26:11
| 2023-09-29T21:27:20
| 2023-09-29T21:27:19
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/650",
"html_url": "https://github.com/ollama/ollama/pull/650",
"diff_url": "https://github.com/ollama/ollama/pull/650.diff",
"patch_url": "https://github.com/ollama/ollama/pull/650.patch",
"merged_at": "2023-09-29T21:27:19"
}
| null |
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/650/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7814
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7814/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7814/comments
|
https://api.github.com/repos/ollama/ollama/issues/7814/events
|
https://github.com/ollama/ollama/issues/7814
| 2,687,508,117
|
I_kwDOJ0Z1Ps6gMB6V
| 7,814
|
Flag to prevent infinite generation in Ollama API
|
{
"login": "gwpl",
"id": 221403,
"node_id": "MDQ6VXNlcjIyMTQwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/221403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gwpl",
"html_url": "https://github.com/gwpl",
"followers_url": "https://api.github.com/users/gwpl/followers",
"following_url": "https://api.github.com/users/gwpl/following{/other_user}",
"gists_url": "https://api.github.com/users/gwpl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gwpl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gwpl/subscriptions",
"organizations_url": "https://api.github.com/users/gwpl/orgs",
"repos_url": "https://api.github.com/users/gwpl/repos",
"events_url": "https://api.github.com/users/gwpl/events{/privacy}",
"received_events_url": "https://api.github.com/users/gwpl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-11-24T10:36:50
| 2024-12-14T15:33:45
| 2024-12-14T15:33:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Problem statement: it seems that "max tokens" is currently a per-model parameter rather than a standard Ollama API parameter that is guaranteed to work everywhere. That makes it harder for integrations that keep adopting new models to provide a failsafe against infinite inference.
I am playing with the SmolLM2 models and they end up in infinite generation loops pretty often.
Since the maximum-token limit is currently model dependent, I wonder if we could have a failsafe flag that limits generation either by token count or by resources (CPU time?), so the server stops computation after a certain limit.
Because the goal is a failsafe, it does not have to be exact (e.g. if one sets 128000 tokens and it generates 132000 tokens, that is fine, as long as the server starts stopping inference as soon as it realizes the threshold was crossed, to prevent infinite inference).
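As a stopgap, Ollama's per-request `num_predict` option can limit generated tokens, and a client can also enforce its own budget. A minimal client-side sketch of the approximate-limit semantics described above (the helper names are illustrative, not part of any Ollama API):

```python
def capped(stream, max_tokens):
    """Yield tokens from `stream`, stopping once the budget is reached.

    Failsafe semantics: the cut-off only has to be approximate -- we stop
    as soon as we notice the threshold was crossed.
    """
    produced = 0
    for token in stream:
        yield token
        produced += 1
        if produced >= max_tokens:
            break

def endless():
    # Stand-in for a model stuck in an infinite generation loop.
    while True:
        yield "tok"

tokens = list(capped(endless(), 5))  # stops after 5 tokens instead of never
```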
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7814/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7814/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2422
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2422/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2422/comments
|
https://api.github.com/repos/ollama/ollama/issues/2422/events
|
https://github.com/ollama/ollama/pull/2422
| 2,126,554,173
|
PR_kwDOJ0Z1Ps5mczuZ
| 2,422
|
More robust shutdown
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-09T06:29:27
| 2024-02-12T22:05:10
| 2024-02-12T22:05:06
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2422",
"html_url": "https://github.com/ollama/ollama/pull/2422",
"diff_url": "https://github.com/ollama/ollama/pull/2422.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2422.patch",
"merged_at": "2024-02-12T22:05:06"
}
|
Make sure that when a shutdown signal comes, we shutdown quickly instead of waiting for a potentially long exchange to wrap up.
My initial strategy was to use multiple signals to trigger a progressively more aggressive shutdown, but that turned into a much more invasive change (trying to recover once shutdown had already started), so I abandoned that approach. This change takes a simpler path: stop new requests from coming in, cancel whatever is in flight at the next completion, and shut down once no requests are actively being processed. If we want to refine this in the future with the double-signal strategy, we can add it incrementally: block new requests on the first signal and, on a second signal, cancel tasks that are still iterating in completion.
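The strategy described above (stop admitting new requests, let the ones in flight drain, then exit) can be sketched as follows. This is an illustrative model only, not the actual Ollama implementation:

```python
import threading

class Drainer:
    """Admit requests until shutdown starts, then wait for them to drain."""

    def __init__(self):
        self._lock = threading.Lock()
        self._in_flight = 0
        self._shutting_down = False
        self._idle = threading.Event()
        self._idle.set()  # no requests in flight yet

    def try_begin(self):
        """Admit a request; refuse once shutdown has started."""
        with self._lock:
            if self._shutting_down:
                return False
            self._in_flight += 1
            self._idle.clear()
            return True

    def end(self):
        """Mark a request as finished; signal idle when none remain."""
        with self._lock:
            self._in_flight -= 1
            if self._in_flight == 0:
                self._idle.set()

    def shutdown(self, timeout=None):
        """Stop admitting new requests, then wait for active ones to finish."""
        with self._lock:
            self._shutting_down = True
            if self._in_flight == 0:
                self._idle.set()
        return self._idle.wait(timeout)
```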
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2422/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7488
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7488/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7488/comments
|
https://api.github.com/repos/ollama/ollama/issues/7488/events
|
https://github.com/ollama/ollama/issues/7488
| 2,631,972,683
|
I_kwDOJ0Z1Ps6c4LdL
| 7,488
|
Avoid clearing response content when parsing tools is unnecessary
|
{
"login": "ouariachi",
"id": 92974022,
"node_id": "U_kgDOBYqrxg",
"avatar_url": "https://avatars.githubusercontent.com/u/92974022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ouariachi",
"html_url": "https://github.com/ouariachi",
"followers_url": "https://api.github.com/users/ouariachi/followers",
"following_url": "https://api.github.com/users/ouariachi/following{/other_user}",
"gists_url": "https://api.github.com/users/ouariachi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ouariachi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ouariachi/subscriptions",
"organizations_url": "https://api.github.com/users/ouariachi/orgs",
"repos_url": "https://api.github.com/users/ouariachi/repos",
"events_url": "https://api.github.com/users/ouariachi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ouariachi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-11-04T06:41:51
| 2024-12-02T14:52:05
| 2024-12-02T14:52:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I just found [this code](https://github.com/ollama/ollama/blob/18237be9b2a4f8b060b9888996a8cc8b02796290/server/routes.go#L1511):
```go
resp.Message.Content = sb.String()
if len(req.Tools) > 0 {
if toolCalls, ok := m.parseToolCalls(sb.String()); ok {
resp.Message.ToolCalls = toolCalls
resp.Message.Content = ""
}
}
```
I don't understand why the response content is forced to be empty whenever tools are supplied. Wouldn't it be better to clear it only when tool calls are actually parsed?
What I need is for the model to interpret the intent of the message and emit tool call(s) only if it thinks they are necessary. For example, if I say "Hello", it should skip the tool and return a normal message; but if I ask "What events do I have today?", it should return the parsed "get_calendar" tool call and the empty message content.
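The requested behavior can be sketched as: blank the content only when the model's output actually parsed into tool calls (hypothetical helper names, mirroring the Go snippet above):

```python
def build_message(content, tools, parse_tool_calls):
    """Return (content, tool_calls); clear content only if calls parsed."""
    if tools:
        tool_calls = parse_tool_calls(content)
        if tool_calls:
            return "", tool_calls
    return content, []

# A plain greeting yields no tool calls, so the text survives:
content, calls = build_message(
    "How can I assist you today?",
    ["get_calendar"],
    lambda text: [],  # stand-in parser that finds no tool calls
)
```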
### Request example
```json
{
"model": "llama3.2",
"messages": [ { "role": "user", "content": "Hi!" } ],
"stream": false,
"tools": [
{
"type": "function",
"function": {
"name": "get_calendar",
"description": "Get the user's calendar with all his events."
}
}
]
}
```
### Actual response
```json
"message": {
"role": "assistant",
"content": "",
"tool_calls": [
{
"function": {
"name": "get_calendar",
"arguments": {}
}
}
]
}
```
### Response I want
```json
"message": {
"role": "assistant",
"content": "How can I assist you today?"
}
```
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7488/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/2223
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2223/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2223/comments
|
https://api.github.com/repos/ollama/ollama/issues/2223/events
|
https://github.com/ollama/ollama/issues/2223
| 2,103,174,258
|
I_kwDOJ0Z1Ps59W-Ry
| 2,223
|
Hello, I have a problem where the lines cannot match and the stack is messed up.
|
{
"login": "FennikzZ",
"id": 59172521,
"node_id": "MDQ6VXNlcjU5MTcyNTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/59172521?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FennikzZ",
"html_url": "https://github.com/FennikzZ",
"followers_url": "https://api.github.com/users/FennikzZ/followers",
"following_url": "https://api.github.com/users/FennikzZ/following{/other_user}",
"gists_url": "https://api.github.com/users/FennikzZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FennikzZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FennikzZ/subscriptions",
"organizations_url": "https://api.github.com/users/FennikzZ/orgs",
"repos_url": "https://api.github.com/users/FennikzZ/repos",
"events_url": "https://api.github.com/users/FennikzZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/FennikzZ/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-01-27T03:44:56
| 2024-03-12T21:28:19
| 2024-03-12T21:28:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
typedef struct Node {
char c;
struct Node* next;
} Node;
Node* head = NULL;
Node* tail = NULL;
int count = 0;
void appendList(char ch) {
Node* n = (Node*)malloc(sizeof(Node));
if (n == NULL) {
printf("Memory allocation failed\n");
return;
}
n->next = NULL;
n->c = ch;
if (head == NULL) {
head = n;
} else {
tail->next = n;
}
tail = n;
count++;
}
void printList() {
Node* current = head;
while (current != NULL) {
printf("%c", current->c);
current = current->next;
}
printf("\n");
}
void destroyList(){
Node *ptr;
while(count>0){
ptr=head;
head=head->next;
count--;
free(ptr);
}
head = NULL;
tail = NULL;
}
typedef struct nd {
char c;
struct nd *next;
} node;
node *top = NULL;
void push(char x) {
node *n = malloc(sizeof(node));
if (n == NULL) {
printf("Memory allocation failed\n");
return;
}
n->next = top;
top = n;
n->c = x;
}
char pop() {
char p;
node *n;
if (top == NULL) {
printf("Stack is empty\n");
return 0;
}
n = top;
top = top->next;
p = n->c;
free(n);
if(p != '('){
appendList(p);
}
return p;
}
char stackTop() {
if (top == NULL) {
return 0;
} else {
return top->c;
}
}
void printStack() {
node* temp = top;
while (temp != NULL) {
printf("%c ", temp->c);
temp = temp->next;
}
printf("\t\t");
}
int prec(char c) {
if (c == '^')
return 3;
else if (c == '/' || c == '*')
return 2;
else if (c == '+' || c == '-')
return 1;
else
return -1;
}
char associativity(char c) {
if (c == '^')
return 'R';
return 'L';
}
void infixToPostfix(){
char ch;
int step = 1;
printf("Infix : ");
while ((ch = getchar()) != '\n') {
if(step == 1){
printf("\n%-15s%-15s%-15s%-15s\n\n","STEP","SYMBOL","STACK","OUTPUT");
}
if (isdigit(ch) || isalpha(ch)) {
appendList(ch);
} else if (ch == '(') {
push(ch);
} else if (ch == ')') {
while (stackTop() != '(') {
pop();
}
pop();
} else {
while (top != NULL && (prec(ch) < prec(stackTop()) || (prec(ch) == prec(stackTop()) && associativity(ch) == 'L'))) {
pop();
}
push(ch);
}
printf("%2d\t\t%c\t\t", step, ch);
printStack();
printList();
step++;
}
while (top != NULL) {
pop();
}
printf("%2d\t\t\t\t\t\t", step);
printList();
printf("\n");
printf("Postfix : ");
printList();
printf("___________________________________________________________\n\n");
}
int main(void) {
    printf("Infix to Postfix converter.\n");
    while (1){
        infixToPostfix();
        destroyList();
    }
    return 0;
}
```
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2223/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5189
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5189/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5189/comments
|
https://api.github.com/repos/ollama/ollama/issues/5189/events
|
https://github.com/ollama/ollama/issues/5189
| 2,364,790,005
|
I_kwDOJ0Z1Ps6M89T1
| 5,189
|
Deepseek-Coder-v2 Instruct Chat Template
|
{
"login": "RussellCanfield",
"id": 17344904,
"node_id": "MDQ6VXNlcjE3MzQ0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/17344904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RussellCanfield",
"html_url": "https://github.com/RussellCanfield",
"followers_url": "https://api.github.com/users/RussellCanfield/followers",
"following_url": "https://api.github.com/users/RussellCanfield/following{/other_user}",
"gists_url": "https://api.github.com/users/RussellCanfield/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RussellCanfield/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RussellCanfield/subscriptions",
"organizations_url": "https://api.github.com/users/RussellCanfield/orgs",
"repos_url": "https://api.github.com/users/RussellCanfield/repos",
"events_url": "https://api.github.com/users/RussellCanfield/events{/privacy}",
"received_events_url": "https://api.github.com/users/RussellCanfield/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-06-20T16:00:55
| 2024-06-21T14:42:46
| 2024-06-21T14:42:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I was testing out DeepSeek Coder V2 using the Ollama /api/chat endpoint and got the following error. Does this model support chat in general?
**Model:**
deepseek-coder-v2:16b-lite-instruct-q4_0
**Error:**
ERROR [validate_model_chat_template] The chat template comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses | tid="0x1ee4b4c00" timestamp=1718899061
The request ends up failing, but that is captured from the Ollama log files.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.44
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5189/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/109
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/109/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/109/comments
|
https://api.github.com/repos/ollama/ollama/issues/109/events
|
https://github.com/ollama/ollama/pull/109
| 1,810,909,768
|
PR_kwDOJ0Z1Ps5V1ks0
| 109
|
fix memory leak in create
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-19T00:15:37
| 2023-07-19T00:25:22
| 2023-07-19T00:25:19
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/109",
"html_url": "https://github.com/ollama/ollama/pull/109",
"diff_url": "https://github.com/ollama/ollama/pull/109.diff",
"patch_url": "https://github.com/ollama/ollama/pull/109.patch",
"merged_at": "2023-07-19T00:25:19"
}
|
Do not buffer the model into memory. Instead, use `io.Copy` to pass the contents directly to the destination writer, whether that is the hasher or the file writer.
This also significantly improves hashing performance.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/109/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6941
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6941/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6941/comments
|
https://api.github.com/repos/ollama/ollama/issues/6941/events
|
https://github.com/ollama/ollama/pull/6941
| 2,546,443,621
|
PR_kwDOJ0Z1Ps58ky8m
| 6,941
|
doc: capture numeric group requirement
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-24T21:41:10
| 2024-11-12T17:13:26
| 2024-11-12T17:13:23
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6941",
"html_url": "https://github.com/ollama/ollama/pull/6941",
"diff_url": "https://github.com/ollama/ollama/pull/6941.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6941.patch",
"merged_at": "2024-11-12T17:13:23"
}
|
Docker uses the container filesystem for name resolution, so we can't guide users to use the name of the host group. Instead they must specify the numeric ID.
Ref #6685
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6941/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7676
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7676/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7676/comments
|
https://api.github.com/repos/ollama/ollama/issues/7676/events
|
https://github.com/ollama/ollama/pull/7676
| 2,660,484,652
|
PR_kwDOJ0Z1Ps6B_UdZ
| 7,676
|
server: allow mixed-case model names on push, pull, cp, and create
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-15T01:37:40
| 2024-11-19T23:05:58
| 2024-11-19T23:05:57
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7676",
"html_url": "https://github.com/ollama/ollama/pull/7676",
"diff_url": "https://github.com/ollama/ollama/pull/7676.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7676.patch",
"merged_at": "2024-11-19T23:05:57"
}
|
This change allows mixed-case model names to be pushed, pulled, copied, and created. This was previously disallowed because the Ollama registry was backed by a Docker registry whose naming convention forbade mixed-case names; that restriction no longer applies.
This does not break existing, intended behaviors.
Also, TestCase now tests a full story of creating, updating, pulling, and copying a model with case variations, ensuring the model's manifest is updated correctly and not duplicated across files with different case variations.
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7676/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2238
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2238/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2238/comments
|
https://api.github.com/repos/ollama/ollama/issues/2238/events
|
https://github.com/ollama/ollama/issues/2238
| 2,103,980,836
|
I_kwDOJ0Z1Ps59aDMk
| 2,238
|
Is there a flag to change the default port?
|
{
"login": "thebigbone",
"id": 95130644,
"node_id": "U_kgDOBauUFA",
"avatar_url": "https://avatars.githubusercontent.com/u/95130644?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thebigbone",
"html_url": "https://github.com/thebigbone",
"followers_url": "https://api.github.com/users/thebigbone/followers",
"following_url": "https://api.github.com/users/thebigbone/following{/other_user}",
"gists_url": "https://api.github.com/users/thebigbone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thebigbone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thebigbone/subscriptions",
"organizations_url": "https://api.github.com/users/thebigbone/orgs",
"repos_url": "https://api.github.com/users/thebigbone/repos",
"events_url": "https://api.github.com/users/thebigbone/events{/privacy}",
"received_events_url": "https://api.github.com/users/thebigbone/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-28T05:39:20
| 2024-01-28T06:03:29
| 2024-01-28T06:03:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi there. I wanted to change the default port. Is it possible?
|
{
"login": "thebigbone",
"id": 95130644,
"node_id": "U_kgDOBauUFA",
"avatar_url": "https://avatars.githubusercontent.com/u/95130644?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thebigbone",
"html_url": "https://github.com/thebigbone",
"followers_url": "https://api.github.com/users/thebigbone/followers",
"following_url": "https://api.github.com/users/thebigbone/following{/other_user}",
"gists_url": "https://api.github.com/users/thebigbone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thebigbone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thebigbone/subscriptions",
"organizations_url": "https://api.github.com/users/thebigbone/orgs",
"repos_url": "https://api.github.com/users/thebigbone/repos",
"events_url": "https://api.github.com/users/thebigbone/events{/privacy}",
"received_events_url": "https://api.github.com/users/thebigbone/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2238/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/320
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/320/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/320/comments
|
https://api.github.com/repos/ollama/ollama/issues/320/events
|
https://github.com/ollama/ollama/issues/320
| 1,845,977,400
|
I_kwDOJ0Z1Ps5uB2E4
| 320
|
Cannot create a model based on llama2:70b
|
{
"login": "asarturas",
"id": 915284,
"node_id": "MDQ6VXNlcjkxNTI4NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/915284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asarturas",
"html_url": "https://github.com/asarturas",
"followers_url": "https://api.github.com/users/asarturas/followers",
"following_url": "https://api.github.com/users/asarturas/following{/other_user}",
"gists_url": "https://api.github.com/users/asarturas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asarturas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asarturas/subscriptions",
"organizations_url": "https://api.github.com/users/asarturas/orgs",
"repos_url": "https://api.github.com/users/asarturas/repos",
"events_url": "https://api.github.com/users/asarturas/events{/privacy}",
"received_events_url": "https://api.github.com/users/asarturas/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 6
| 2023-08-10T22:32:47
| 2023-09-05T08:14:50
| 2023-08-23T23:50:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If we change the example [devops-engineer](https://github.com/jmorganca/ollama/blob/main/examples/devops-engineer/Modelfile) model slightly to use the 70b model instead of 13b, like:
```
# Modelfile for creating a devops engineer assistant
# Run `ollama create devops-engineer -f ./Modelfile` and then `ollama run devops-engineer` and enter a topic
FROM llama2:70b
PARAMETER temperature 1
SYSTEM """
You are a senior devops engineer, acting as an assistant. You offer help with cloud technologies like: Terraform, AWS, kubernetes, python. You answer with code examples when possible
"""
```
Creation then completes fine, but running the model fails with an error:
```
$ ollama run devops
>>> hello
Error: failed to load model
For more details, check the error logs at /Users/ollama/.ollama/logs/server.log
```
and the diagnostics is:
```
error loading model: llama.cpp: tensor 'layers.0.attention.wk.weight' has wrong shape; expected 8192 x 8192, got 8192 x 1024
llama_load_model_from_file: failed to load model
```
while the same Modelfile with the originally used 13b model works fine.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/320/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1380
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1380/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1380/comments
|
https://api.github.com/repos/ollama/ollama/issues/1380/events
|
https://github.com/ollama/ollama/issues/1380
| 2,024,736,493
|
I_kwDOJ0Z1Ps54rwbt
| 1,380
|
Is it possible to add model and prompt params like max_tokens or temperature?
|
{
"login": "TumblerWarren",
"id": 137818183,
"node_id": "U_kgDOCDbwRw",
"avatar_url": "https://avatars.githubusercontent.com/u/137818183?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TumblerWarren",
"html_url": "https://github.com/TumblerWarren",
"followers_url": "https://api.github.com/users/TumblerWarren/followers",
"following_url": "https://api.github.com/users/TumblerWarren/following{/other_user}",
"gists_url": "https://api.github.com/users/TumblerWarren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TumblerWarren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TumblerWarren/subscriptions",
"organizations_url": "https://api.github.com/users/TumblerWarren/orgs",
"repos_url": "https://api.github.com/users/TumblerWarren/repos",
"events_url": "https://api.github.com/users/TumblerWarren/events{/privacy}",
"received_events_url": "https://api.github.com/users/TumblerWarren/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-12-04T21:03:02
| 2023-12-29T02:55:07
| 2023-12-04T21:46:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be great if I could set params like temperature and max_tokens.
Also, is it possible to turn off streaming?
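For reference, the Ollama REST API does accept per-request options and a stream flag; `num_predict` plays the role of max_tokens. Below is a minimal sketch of building such a request body for `POST /api/generate` — the model name and prompt are illustrative assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Options carries per-request model parameters; num_predict limits
// the number of generated tokens (the max_tokens analogue).
type Options struct {
	Temperature float64 `json:"temperature"`
	NumPredict  int     `json:"num_predict"`
}

// GenerateRequest is the body for POST /api/generate; Stream=false
// asks for a single non-streamed response.
type GenerateRequest struct {
	Model   string  `json:"model"`
	Prompt  string  `json:"prompt"`
	Stream  bool    `json:"stream"`
	Options Options `json:"options"`
}

func buildRequest() ([]byte, error) {
	req := GenerateRequest{
		Model:   "llama2", // assumed model name
		Prompt:  "Why is the sky blue?",
		Stream:  false,
		Options: Options{Temperature: 0.7, NumPredict: 128},
	}
	return json.Marshal(req)
}

func main() {
	body, err := buildRequest()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```

The resulting JSON can be posted to a running Ollama server with any HTTP client.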
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1380/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5659
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5659/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5659/comments
|
https://api.github.com/repos/ollama/ollama/issues/5659/events
|
https://github.com/ollama/ollama/issues/5659
| 2,406,544,500
|
I_kwDOJ0Z1Ps6PcPR0
| 5,659
|
Using both CPU + GPU for Parallel Models
|
{
"login": "owenzhao",
"id": 2182896,
"node_id": "MDQ6VXNlcjIxODI4OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2182896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/owenzhao",
"html_url": "https://github.com/owenzhao",
"followers_url": "https://api.github.com/users/owenzhao/followers",
"following_url": "https://api.github.com/users/owenzhao/following{/other_user}",
"gists_url": "https://api.github.com/users/owenzhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/owenzhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/owenzhao/subscriptions",
"organizations_url": "https://api.github.com/users/owenzhao/orgs",
"repos_url": "https://api.github.com/users/owenzhao/repos",
"events_url": "https://api.github.com/users/owenzhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/owenzhao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-07-13T00:51:04
| 2024-11-13T18:42:08
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Starting with 0.2, Ollama supports running models in parallel, which makes memory more valuable than before. Moreover, on an operating system like Windows, GPU memory is fixed unless we purchase another card.
Fortunately, system memory is much cheaper than replacing the GPU, and Ollama can run on the CPU alone, just at a lower speed. So if we could use both the CPU and the GPU at the same time, we could add more system memory and run more models in parallel.
The approach could be:
1. Use the GPU only, but use system memory as a model cache. That would make switching models quicker.
2. Use the GPU first; once GPU memory is full, fall back to the CPU.
That is all I have thought of. More ideas are appreciated.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5659/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5659/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5097
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5097/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5097/comments
|
https://api.github.com/repos/ollama/ollama/issues/5097/events
|
https://github.com/ollama/ollama/pull/5097
| 2,357,086,171
|
PR_kwDOJ0Z1Ps5yrhrP
| 5,097
|
docs:add ollamaGen to Web and Desktop section
|
{
"login": "moriire",
"id": 56216197,
"node_id": "MDQ6VXNlcjU2MjE2MTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/56216197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moriire",
"html_url": "https://github.com/moriire",
"followers_url": "https://api.github.com/users/moriire/followers",
"following_url": "https://api.github.com/users/moriire/following{/other_user}",
"gists_url": "https://api.github.com/users/moriire/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moriire/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moriire/subscriptions",
"organizations_url": "https://api.github.com/users/moriire/orgs",
"repos_url": "https://api.github.com/users/moriire/repos",
"events_url": "https://api.github.com/users/moriire/events{/privacy}",
"received_events_url": "https://api.github.com/users/moriire/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-06-17T11:18:23
| 2024-10-08T10:43:44
| 2024-10-08T10:43:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5097",
"html_url": "https://github.com/ollama/ollama/pull/5097",
"diff_url": "https://github.com/ollama/ollama/pull/5097.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5097.patch",
"merged_at": null
}
| null |
{
"login": "moriire",
"id": 56216197,
"node_id": "MDQ6VXNlcjU2MjE2MTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/56216197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moriire",
"html_url": "https://github.com/moriire",
"followers_url": "https://api.github.com/users/moriire/followers",
"following_url": "https://api.github.com/users/moriire/following{/other_user}",
"gists_url": "https://api.github.com/users/moriire/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moriire/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moriire/subscriptions",
"organizations_url": "https://api.github.com/users/moriire/orgs",
"repos_url": "https://api.github.com/users/moriire/repos",
"events_url": "https://api.github.com/users/moriire/events{/privacy}",
"received_events_url": "https://api.github.com/users/moriire/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5097/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5794
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5794/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5794/comments
|
https://api.github.com/repos/ollama/ollama/issues/5794/events
|
https://github.com/ollama/ollama/issues/5794
| 2,418,723,959
|
I_kwDOJ0Z1Ps6QKsx3
| 5,794
|
Expose model capabilities via /api/tags and /v1/models[/model]
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 1
| 2024-07-19T12:00:49
| 2024-11-06T01:04:02
| null |
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be convenient if the capabilities of a model (completion, tools, insert, \<future caps\>) were made available so that clients can adjust API calls.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5794/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5794/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/339
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/339/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/339/comments
|
https://api.github.com/repos/ollama/ollama/issues/339/events
|
https://github.com/ollama/ollama/issues/339
| 1,848,870,667
|
I_kwDOJ0Z1Ps5uM4cL
| 339
|
PARAMETER nombre_thread 8 not work
|
{
"login": "sagrabz",
"id": 4752912,
"node_id": "MDQ6VXNlcjQ3NTI5MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4752912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sagrabz",
"html_url": "https://github.com/sagrabz",
"followers_url": "https://api.github.com/users/sagrabz/followers",
"following_url": "https://api.github.com/users/sagrabz/following{/other_user}",
"gists_url": "https://api.github.com/users/sagrabz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sagrabz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sagrabz/subscriptions",
"organizations_url": "https://api.github.com/users/sagrabz/orgs",
"repos_url": "https://api.github.com/users/sagrabz/repos",
"events_url": "https://api.github.com/users/sagrabz/events{/privacy}",
"received_events_url": "https://api.github.com/users/sagrabz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-08-14T02:06:53
| 2023-08-21T21:27:06
| 2023-08-21T21:27:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
PARAMETER nombre_thread 8 does not work on my PC; it still uses 100% of the CPU.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/339/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3675
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3675/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3675/comments
|
https://api.github.com/repos/ollama/ollama/issues/3675/events
|
https://github.com/ollama/ollama/issues/3675
| 2,246,488,175
|
I_kwDOJ0Z1Ps6F5rBv
| 3,675
|
Getting error while using ollama run on mac
|
{
"login": "swetavsavarn02",
"id": 166374157,
"node_id": "U_kgDOCeqrDQ",
"avatar_url": "https://avatars.githubusercontent.com/u/166374157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/swetavsavarn02",
"html_url": "https://github.com/swetavsavarn02",
"followers_url": "https://api.github.com/users/swetavsavarn02/followers",
"following_url": "https://api.github.com/users/swetavsavarn02/following{/other_user}",
"gists_url": "https://api.github.com/users/swetavsavarn02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/swetavsavarn02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/swetavsavarn02/subscriptions",
"organizations_url": "https://api.github.com/users/swetavsavarn02/orgs",
"repos_url": "https://api.github.com/users/swetavsavarn02/repos",
"events_url": "https://api.github.com/users/swetavsavarn02/events{/privacy}",
"received_events_url": "https://api.github.com/users/swetavsavarn02/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-04-16T16:44:32
| 2024-04-25T16:57:15
| 2024-04-25T07:19:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Getting error while doing ollama run on mac on every model , Stating Error: Head "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/26/2609048d349e7c70196401be59bea7eb89a968d4642e409b0e798b34403b96c8/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240416%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240416T164135Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=98eed7690305ee8731cb04e894c1d549a8d9988e3c68091011ff61e67017752b": dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host
<img width="1440" alt="Screenshot 2024-04-16 at 10 13 49 PM" src="https://github.com/ollama/ollama/assets/166374157/4e58dc64-1b6a-4331-8def-382a47d1a4e4">
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
macOS
### Architecture
arm64
### Platform
_No response_
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "swetavsavarn02",
"id": 166374157,
"node_id": "U_kgDOCeqrDQ",
"avatar_url": "https://avatars.githubusercontent.com/u/166374157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/swetavsavarn02",
"html_url": "https://github.com/swetavsavarn02",
"followers_url": "https://api.github.com/users/swetavsavarn02/followers",
"following_url": "https://api.github.com/users/swetavsavarn02/following{/other_user}",
"gists_url": "https://api.github.com/users/swetavsavarn02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/swetavsavarn02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/swetavsavarn02/subscriptions",
"organizations_url": "https://api.github.com/users/swetavsavarn02/orgs",
"repos_url": "https://api.github.com/users/swetavsavarn02/repos",
"events_url": "https://api.github.com/users/swetavsavarn02/events{/privacy}",
"received_events_url": "https://api.github.com/users/swetavsavarn02/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3675/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1776
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1776/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1776/comments
|
https://api.github.com/repos/ollama/ollama/issues/1776/events
|
https://github.com/ollama/ollama/pull/1776
| 2,064,644,127
|
PR_kwDOJ0Z1Ps5jK76C
| 1,776
|
Add ollama user to render group for Radeon support
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-03T21:03:32
| 2024-01-03T21:07:58
| 2024-01-03T21:07:54
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1776",
"html_url": "https://github.com/ollama/ollama/pull/1776",
"diff_url": "https://github.com/ollama/ollama/pull/1776.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1776.patch",
"merged_at": "2024-01-03T21:07:54"
}
|
For the ROCm libraries to access the driver, we need to add the ollama user to the render group.
Note: the script will need more work to fully support Radeon cards, but this will at least make the installation functional. Without this change, the server sits in a crash loop due to lack of permissions to access the driver.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1776/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2398
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2398/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2398/comments
|
https://api.github.com/repos/ollama/ollama/issues/2398/events
|
https://github.com/ollama/ollama/issues/2398
| 2,123,973,645
|
I_kwDOJ0Z1Ps5-mUQN
| 2,398
|
Running Ollama on mac but accessing through SSH only?
|
{
"login": "sei-dupdyke",
"id": 43444464,
"node_id": "MDQ6VXNlcjQzNDQ0NDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/43444464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sei-dupdyke",
"html_url": "https://github.com/sei-dupdyke",
"followers_url": "https://api.github.com/users/sei-dupdyke/followers",
"following_url": "https://api.github.com/users/sei-dupdyke/following{/other_user}",
"gists_url": "https://api.github.com/users/sei-dupdyke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sei-dupdyke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sei-dupdyke/subscriptions",
"organizations_url": "https://api.github.com/users/sei-dupdyke/orgs",
"repos_url": "https://api.github.com/users/sei-dupdyke/repos",
"events_url": "https://api.github.com/users/sei-dupdyke/events{/privacy}",
"received_events_url": "https://api.github.com/users/sei-dupdyke/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-02-07T22:01:15
| 2024-07-25T06:36:09
| 2024-02-07T22:08:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can I run the app on an Apple silicon-based Mac that is accessible via SSH only? After copying the installer out there, something like:
```bash
unzip Ollama-darwin.zip
mv Ollama.app /Applications/.
cd /Applications/.
chmod +x Ollama.app
open -n Ollama.app
```
but this gives no indication of changes, and when I subsequently run `ollama list` I get "zsh: command not found: ollama" (even with a new shell, or after logging out and back in).
Is there a way to run it in this manner? Thanks!!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2398/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3241
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3241/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3241/comments
|
https://api.github.com/repos/ollama/ollama/issues/3241/events
|
https://github.com/ollama/ollama/pull/3241
| 2,194,453,780
|
PR_kwDOJ0Z1Ps5qD9Gj
| 3,241
|
update memory estimations for gpu offloading
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-19T09:30:00
| 2024-04-01T20:59:15
| 2024-04-01T20:59:14
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3241",
"html_url": "https://github.com/ollama/ollama/pull/3241",
"diff_url": "https://github.com/ollama/ollama/pull/3241.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3241.patch",
"merged_at": "2024-04-01T20:59:14"
}
|
Take into account the memory footprint of each layer:
1. replace percentage overhead with static overhead of 377 MiB for cuda and rocm
2. add projector memory footprint to estimation
3. add layer footprint to estimation on a per-layer basis, including output layers
4. replace static kv memory footprint with pro-rated footprint based on how many layers to offload
5. set minimum context length for multimodal models to 2048
6. report memory requirements as structured logs
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3241/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2665
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2665/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2665/comments
|
https://api.github.com/repos/ollama/ollama/issues/2665/events
|
https://github.com/ollama/ollama/issues/2665
| 2,148,244,731
|
I_kwDOJ0Z1Ps6AC5z7
| 2,665
|
Microsoft Virus alert
|
{
"login": "nagkumar",
"id": 332234,
"node_id": "MDQ6VXNlcjMzMjIzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/332234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nagkumar",
"html_url": "https://github.com/nagkumar",
"followers_url": "https://api.github.com/users/nagkumar/followers",
"following_url": "https://api.github.com/users/nagkumar/following{/other_user}",
"gists_url": "https://api.github.com/users/nagkumar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nagkumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nagkumar/subscriptions",
"organizations_url": "https://api.github.com/users/nagkumar/orgs",
"repos_url": "https://api.github.com/users/nagkumar/repos",
"events_url": "https://api.github.com/users/nagkumar/events{/privacy}",
"received_events_url": "https://api.github.com/users/nagkumar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-22T05:40:57
| 2024-02-22T06:41:35
| 2024-02-22T06:41:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
<img width="593" alt="image" src="https://github.com/ollama/ollama/assets/332234/53eb03aa-b3f8-48e2-8dff-62ce5a1b413b">
What is wrong with this message? Is there anything we need to worry about when installing on Windows?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2665/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4553
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4553/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4553/comments
|
https://api.github.com/repos/ollama/ollama/issues/4553/events
|
https://github.com/ollama/ollama/pull/4553
| 2,307,776,077
|
PR_kwDOJ0Z1Ps5wDjXN
| 4,553
|
Fix a typo in server/sched.go
|
{
"login": "coolljt0725",
"id": 8232360,
"node_id": "MDQ6VXNlcjgyMzIzNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8232360?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coolljt0725",
"html_url": "https://github.com/coolljt0725",
"followers_url": "https://api.github.com/users/coolljt0725/followers",
"following_url": "https://api.github.com/users/coolljt0725/following{/other_user}",
"gists_url": "https://api.github.com/users/coolljt0725/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coolljt0725/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coolljt0725/subscriptions",
"organizations_url": "https://api.github.com/users/coolljt0725/orgs",
"repos_url": "https://api.github.com/users/coolljt0725/repos",
"events_url": "https://api.github.com/users/coolljt0725/events{/privacy}",
"received_events_url": "https://api.github.com/users/coolljt0725/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-05-21T09:07:10
| 2024-05-21T20:39:25
| 2024-05-21T20:39:24
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4553",
"html_url": "https://github.com/ollama/ollama/pull/4553",
"diff_url": "https://github.com/ollama/ollama/pull/4553.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4553.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4553/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4548
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4548/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4548/comments
|
https://api.github.com/repos/ollama/ollama/issues/4548/events
|
https://github.com/ollama/ollama/issues/4548
| 2,307,020,600
|
I_kwDOJ0Z1Ps6Jglc4
| 4,548
|
Error: [0] server cpu not listed in available servers map[]
|
{
"login": "ZivenLu",
"id": 169518956,
"node_id": "U_kgDOChqnbA",
"avatar_url": "https://avatars.githubusercontent.com/u/169518956?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZivenLu",
"html_url": "https://github.com/ZivenLu",
"followers_url": "https://api.github.com/users/ZivenLu/followers",
"following_url": "https://api.github.com/users/ZivenLu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZivenLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZivenLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZivenLu/subscriptions",
"organizations_url": "https://api.github.com/users/ZivenLu/orgs",
"repos_url": "https://api.github.com/users/ZivenLu/repos",
"events_url": "https://api.github.com/users/ZivenLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZivenLu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 19
| 2024-05-21T00:22:55
| 2024-07-15T07:50:17
| 2024-05-23T15:58:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I get the error message below when I run the command **ollama run llama3**. I am using Windows and the Ollama service is already up.
The error message:
```
C:\Users\xxx>ollama run llama3
Error: [0] server cpu not listed in available servers map[]
```
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.38
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4548/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4548/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5898
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5898/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5898/comments
|
https://api.github.com/repos/ollama/ollama/issues/5898/events
|
https://github.com/ollama/ollama/pull/5898
| 2,426,346,360
|
PR_kwDOJ0Z1Ps52RuFs
| 5,898
|
server: speed up single gguf creates
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-07-24T00:25:11
| 2024-08-12T16:28:57
| 2024-08-12T16:28:55
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5898",
"html_url": "https://github.com/ollama/ollama/pull/5898",
"diff_url": "https://github.com/ollama/ollama/pull/5898.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5898.patch",
"merged_at": "2024-08-12T16:28:55"
}
|
The blob for a file containing a single GGUF is already copied to the server via `/api/blobs/:digest`.
On `createModel`, we can therefore avoid rewriting this blob.
This does not improve create speeds for safetensors files or for files with multiple GGUFs.
New logs
```
[GIN] 2024/07/23 - 17:21:45 | 201 | 5.298126708s | 127.0.0.1 | POST "/api/blobs/sha256:54696cbcadd1959275fc99f9cc67880d2f38419124da06cdf2140bad2dc3d94c"
[GIN] 2024/07/23 - 17:21:45 | 200 | 8.519125ms | 127.0.0.1 | POST "/api/create"
```
Old logs
```
[GIN] 2024/07/23 - 17:24:27 | 201 | 5.283626083s | 127.0.0.1 | POST "/api/blobs/sha256:54696cbcadd1959275fc99f9cc67880d2f38419124da06cdf2140bad2dc3d94c"
[GIN] 2024/07/23 - 17:24:32 | 200 | 4.302020959s | 127.0.0.1 | POST "/api/create"
```
resolves: https://github.com/ollama/ollama/issues/5388
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5898/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/198
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/198/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/198/comments
|
https://api.github.com/repos/ollama/ollama/issues/198/events
|
https://github.com/ollama/ollama/pull/198
| 1,818,971,585
|
PR_kwDOJ0Z1Ps5WQtKM
| 198
|
make response errors unique for error trace
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-24T19:04:37
| 2023-07-24T19:30:46
| 2023-07-24T19:21:18
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/198",
"html_url": "https://github.com/ollama/ollama/pull/198",
"diff_url": "https://github.com/ollama/ollama/pull/198.diff",
"patch_url": "https://github.com/ollama/ollama/pull/198.patch",
"merged_at": "2023-07-24T19:21:18"
}
| null |
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/198/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6809
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6809/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6809/comments
|
https://api.github.com/repos/ollama/ollama/issues/6809/events
|
https://github.com/ollama/ollama/issues/6809
| 2,526,690,909
|
I_kwDOJ0Z1Ps6Wmj5d
| 6,809
|
Unexpected interruption while processing large amounts of text containing Chinese punctuation (resolved by restarting; possibly a version conflict)
|
{
"login": "running-frog",
"id": 64595207,
"node_id": "MDQ6VXNlcjY0NTk1MjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/64595207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/running-frog",
"html_url": "https://github.com/running-frog",
"followers_url": "https://api.github.com/users/running-frog/followers",
"following_url": "https://api.github.com/users/running-frog/following{/other_user}",
"gists_url": "https://api.github.com/users/running-frog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/running-frog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/running-frog/subscriptions",
"organizations_url": "https://api.github.com/users/running-frog/orgs",
"repos_url": "https://api.github.com/users/running-frog/repos",
"events_url": "https://api.github.com/users/running-frog/events{/privacy}",
"received_events_url": "https://api.github.com/users/running-frog/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-15T01:44:00
| 2024-09-15T16:43:42
| 2024-09-15T16:43:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Resolved.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6809/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4216
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4216/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4216/comments
|
https://api.github.com/repos/ollama/ollama/issues/4216/events
|
https://github.com/ollama/ollama/issues/4216
| 2,281,997,630
|
I_kwDOJ0Z1Ps6IBIU-
| 4,216
|
Unable to load CUDA management library
|
{
"login": "ru4en",
"id": 60962448,
"node_id": "MDQ6VXNlcjYwOTYyNDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/60962448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ru4en",
"html_url": "https://github.com/ru4en",
"followers_url": "https://api.github.com/users/ru4en/followers",
"following_url": "https://api.github.com/users/ru4en/following{/other_user}",
"gists_url": "https://api.github.com/users/ru4en/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ru4en/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ru4en/subscriptions",
"organizations_url": "https://api.github.com/users/ru4en/orgs",
"repos_url": "https://api.github.com/users/ru4en/repos",
"events_url": "https://api.github.com/users/ru4en/events{/privacy}",
"received_events_url": "https://api.github.com/users/ru4en/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-05-07T00:20:50
| 2024-05-20T19:19:17
| 2024-05-07T16:43:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Not sure if this issue has been reported previously for Docker; however, it's similar to the issue reported here: https://github.com/ollama/ollama/issues/1895, which seems to be closed now. Any help would be appreciated :)
`Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.78: nvml vram init failure: 999`
```
docker container logs ab3d7ca114c0
time=2024-05-06T23:50:07.155Z level=INFO source=images.go:800 msg="total blobs: 22"
time=2024-05-06T23:50:07.157Z level=INFO source=images.go:807 msg="total unused blobs removed: 0"
time=2024-05-06T23:50:07.157Z level=INFO source=routes.go:1019 msg="Listening on [::]:11434 (version 0.1.28)"
time=2024-05-06T23:50:07.157Z level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-05-06T23:50:09.242Z level=INFO source=payload_common.go:150 msg="Dynamic LLM libraries [rocm_v60000 cpu_avx cuda_v11 cpu_avx2 cpu]"
time=2024-05-06T23:50:09.242Z level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-05-06T23:50:09.242Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-05-06T23:50:09.247Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.78]"
time=2024-05-06T23:50:09.255Z level=INFO source=gpu.go:82 msg="Nvidia GPU detected"
time=2024-05-06T23:50:09.255Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-06T23:50:09.260Z level=INFO source=gpu.go:119 msg="CUDA Compute Capability detected: 8.6"
[GIN] 2024/05/06 - 23:50:10 | 200 | 31.428µs | 172.17.0.1 | HEAD "/"
[GIN] 2024/05/06 - 23:50:10 | 200 | 1.142676ms | 172.17.0.1 | POST "/api/show"
[GIN] 2024/05/06 - 23:50:10 | 200 | 249.683µs | 172.17.0.1 | POST "/api/show"
time=2024-05-06T23:50:11.615Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-06T23:50:11.615Z level=INFO source=gpu.go:119 msg="CUDA Compute Capability detected: 8.6"
time=2024-05-06T23:50:11.615Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-06T23:50:11.615Z level=INFO source=gpu.go:119 msg="CUDA Compute Capability detected: 8.6"
time=2024-05-06T23:50:11.615Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-06T23:50:11.621Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /root/.ollama/assets/0.1.28/cuda_v11/libext_server.so"
time=2024-05-06T23:50:11.621Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
time=2024-05-06T23:50:11.622Z level=WARN source=llm.go:162 msg="Failed to load dynamic library /root/.ollama/assets/0.1.28/cuda_v11/libext_server.so Unable to init GPU: unknown error"
time=2024-05-06T23:50:11.623Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /root/.ollama/assets/0.1.28/cpu_avx2/libext_server.so"
time=2024-05-06T23:50:11.623Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256:00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_tensors: ggml ctx size = 0.11 MiB
llm_load_tensors: CPU buffer size = 4437.80 MiB
.......................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU input buffer size = 13.02 MiB
llama_new_context_with_model: CPU compute buffer size = 258.50 MiB
llama_new_context_with_model: graph splits (measure): 1
time=2024-05-06T23:50:15.703Z level=INFO source=dyn_ext_server.go:162 msg="Starting llama main loop"
[GIN] 2024/05/06 - 23:50:15 | 200 | 4.96996479s | 172.17.0.1 | POST "/api/chat"
[GIN] 2024/05/06 - 23:50:20 | 200 | 3.244657519s | 172.17.0.1 | POST "/api/chat"
[GIN] 2024/05/06 - 23:50:24 | 200 | 1.628777813s | 172.17.0.1 | POST "/api/chat"
[GIN] 2024/05/06 - 23:51:31 | 200 | 21.511µs | 172.17.0.1 | HEAD "/"
[GIN] 2024/05/06 - 23:51:31 | 200 | 476.248µs | 172.17.0.1 | POST "/api/show"
[GIN] 2024/05/06 - 23:51:31 | 200 | 239.486µs | 172.17.0.1 | POST "/api/show"
[GIN] 2024/05/06 - 23:51:31 | 200 | 225.936µs | 172.17.0.1 | POST "/api/chat"
[GIN] 2024/05/06 - 23:51:37 | 200 | 3.305428824s | 172.17.0.1 | POST "/api/chat"
[GIN] 2024/05/06 - 23:51:41 | 200 | 19.207µs | 172.17.0.1 | HEAD "/"
[GIN] 2024/05/06 - 23:51:41 | 200 | 297.872µs | 172.17.0.1 | POST "/api/show"
[GIN] 2024/05/06 - 23:51:41 | 200 | 252.546µs | 172.17.0.1 | POST "/api/show"
[GIN] 2024/05/06 - 23:51:41 | 200 | 173.765µs | 172.17.0.1 | POST "/api/chat"
loading library /root/.ollama/assets/0.1.28/cuda_v11/libext_server.so
loading library /root/.ollama/assets/0.1.28/cpu_avx2/libext_server.so
{"function":"initialize","level":"INFO","line":688,"msg":"initializing slots","n_slots":1,"tid":"140377447204416","timestamp":1715039415}
{"function":"initialize","id_slot":0,"level":"INFO","line":696,"msg":"new slot","n_ctx_slot":2048,"tid":"140377447204416","timestamp":1715039415}
{"function":"update_slots","level":"INFO","line":1593,"msg":"all slots are idle","tid":"140376599955008","timestamp":1715039415}
{"function":"launch_slot_with_data","id_slot":0,"id_task":0,"level":"INFO","line":970,"msg":"slot is processing task","tid":"140376599955008","timestamp":1715039417}
{"function":"update_slots","id_slot":0,"id_task":0,"level":"INFO","line":1841,"msg":"kv cache rm [p0, end)","p0":0,"tid":"140376599955008","timestamp":1715039417}
{"function":"update_slots","id_slot":0,"id_task":0,"level":"INFO","line":1567,"msg":"slot released","n_cache_tokens":30,"n_ctx":2048,"n_past":30,"n_system_tokens":0,"tid":"140376599955008","timestamp":1715039420,"truncated":false}
{"function":"update_slots","level":"INFO","line":1593,"msg":"all slots are idle","tid":"140376599955008","timestamp":1715039420}
{"function":"launch_slot_with_data","id_slot":0,"id_task":22,"level":"INFO","line":970,"msg":"slot is processing task","tid":"140376599955008","timestamp":1715039423}
{"function":"update_slots","id_slot":0,"id_task":22,"level":"INFO","line":1841,"msg":"kv cache rm [p0, end)","p0":10,"tid":"140376599955008","timestamp":1715039423}
{"function":"update_slots","id_slot":0,"id_task":22,"level":"INFO","line":1567,"msg":"slot released","n_cache_tokens":43,"n_ctx":2048,"n_past":43,"n_system_tokens":0,"tid":"140376599955008","timestamp":1715039424,"truncated":false}
{"function":"update_slots","level":"INFO","line":1593,"msg":"all slots are idle","tid":"140376599955008","timestamp":1715039424}
{"function":"launch_slot_with_data","id_slot":0,"id_task":26,"level":"INFO","line":970,"msg":"slot is processing task","tid":"140376599955008","timestamp":1715039494}
{"function":"update_slots","id_slot":0,"id_task":26,"level":"INFO","line":1841,"msg":"kv cache rm [p0, end)","p0":5,"tid":"140376599955008","timestamp":1715039494}
{"function":"update_slots","id_slot":0,"id_task":26,"level":"INFO","line":1567,"msg":"slot released","n_cache_tokens":33,"n_ctx":2048,"n_past":33,"n_system_tokens":0,"tid":"140376599955008","timestamp":1715039497,"truncated":false}
{"function":"update_slots","level":"INFO","line":1593,"msg":"all slots are idle","tid":"140376599955008","timestamp":1715039497}
{"function":"launch_slot_with_data","id_slot":0,"id_task":51,"level":"INFO","line":970,"msg":"slot is processing task","tid":"140376599955008","timestamp":1715039504}
{"function":"update_slots","id_slot":0,"id_task":51,"level":"INFO","line":1841,"msg":"kv cache rm [p0, end)","p0":5,"tid":"140376599955008","timestamp":1715039504}
{"function":"print_timings","id_slot":0,"id_task":51,"level":"INFO","line":304,"msg":"prompt eval time = 346.66 ms / 6 tokens ( 57.78 ms per token, 17.31 tokens per second)","n_prompt_tokens_processed":6,"n_tokens_second":17.30822486845749,"t_prompt_processing":346.656,"t_token":57.776,"tid":"140376599955008","timestamp":1715039511}
{"function":"print_timings","id_slot":0,"id_task":51,"level":"INFO","line":320,"msg":"generation eval time = 7210.41 ms / 52 runs ( 138.66 ms per token, 7.21 tokens per second)","n_decoded":52,"n_tokens_second":7.211794168182646,"t_token":138.66175,"t_token_generation":7210.411,"tid":"140376599955008","timestamp":1715039511}
{"function":"print_timings","id_slot":0,"id_task":51,"level":"INFO","line":331,"msg":" total time = 7557.07 ms","t_prompt_processing":346.656,"t_token_generation":7210.411,"t_total":7557.067,"tid":"140376599955008","timestamp":1715039511}
{"function":"update_slots","id_slot":0,"id_task":51,"level":"INFO","line":1567,"msg":"slot released","n_cache_tokens":62,"n_ctx":2048,"n_past":62,"n_system_tokens":0,"tid":"140376599955008","time[GIN] 2024/05/06 - 23:51:51 | 200 | 7.55845712s | 172.17.0.1 | POST "/api/chat"
[GIN] 2024/05/06 - 23:52:39 | 200 | 26.679µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/06 - 23:52:39 | 200 | 373.93µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/06 - 23:52:39 | 200 | 228.381µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/06 - 23:52:39 | 200 | 169.924µs | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/05/06 - 23:52:45 | 200 | 4.17958515s | 127.0.0.1 | POST "/api/chat"
time=2024-05-06T23:54:55.278Z level=INFO source=images.go:800 msg="total blobs: 22"
time=2024-05-06T23:54:55.279Z level=INFO source=images.go:807 msg="total unused blobs removed: 0"
time=2024-05-06T23:54:55.280Z level=INFO source=routes.go:1019 msg="Listening on [::]:11434 (version 0.1.28)"
time=2024-05-06T23:54:55.280Z level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-05-06T23:54:57.279Z level=INFO source=payload_common.go:150 msg="Dynamic LLM libraries [rocm_v60000 cpu_avx cpu cuda_v11 cpu_avx2]"
time=2024-05-06T23:54:57.280Z level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-05-06T23:54:57.280Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-05-06T23:54:57.287Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.78]"
time=2024-05-06T23:54:57.296Z level=INFO source=gpu.go:249 msg="Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.78: nvml vram init failure: 999"
time=2024-05-06T23:54:57.296Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-06T23:54:57.296Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-06T23:54:57.297Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions []"
time=2024-05-06T23:54:57.297Z level=INFO source=amd_linux.go:92 msg="all detected amdgpus are skipped, falling back to CPU"
time=2024-05-06T23:54:57.297Z level=INFO source=routes.go:1042 msg="no GPU detected"
[GIN] 2024/05/06 - 23:54:57 | 200 | 30.102µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/06 - 23:54:57 | 200 | 524.509µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/06 - 23:54:57 | 200 | 236.623µs | 127.0.0.1 | POST "/api/show"
time=2024-05-06T23:54:58.301Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-06T23:54:58.302Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-06T23:54:58.302Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions []"
time=2024-05-06T23:54:58.302Z level=INFO source=amd_linux.go:92 msg="all detected amdgpus are skipped, falling back to CPU"
time=2024-05-06T23:54:58.302Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-06T23:54:58.302Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-06T23:54:58.302Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions []"
time=2024-05-06T23:54:58.302Z level=INFO source=amd_linux.go:92 msg="all detected amdgpus are skipped, falling back to CPU"
time=2024-05-06T23:54:58.302Z level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
time=2024-05-06T23:54:58.303Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /root/.ollama/assets/0.1.28/cpu_avx2/libext_server.so"
time=2024-05-06T23:54:58.303Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256:00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_tensors: ggml ctx size = 0.11 MiB
llm_load_tensors: CPU buffer size = 4437.80 MiB
.......................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU input buffer size = 13.02 MiB
llama_new_context_with_model: CPU compute buffer size = 258.50 MiB
llama_new_context_with_model: graph splits (measure): 1
time=2024-05-06T23:54:59.520Z level=INFO source=dyn_ext_server.go:162 msg="Starting llama main loop"
[GIN] 2024/05/06 - 23:54:59 | 200 | 2.093694988s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/05/06 - 23:55:04 | 200 | 3.926582976s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/05/06 - 23:55:19 | 200 | 11.588269133s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/05/06 - 23:56:22 | 200 | 19.416µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/06 - 23:56:22 | 200 | 1.543284ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/06 - 23:56:22 | 200 | 244.235µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/06 - 23:56:22 | 200 | 161.334µs | 127.0.0.1 | POST "/api/chat"
loading library /root/.ollama/assets/0.1.28/cpu_avx2/libext_server.so
{"function":"initialize","level":"INFO","line":688,"msg":"initializing slots","n_slots":1,"tid":"140312867505728","timestamp":1715039699}
{"function":"initialize","id_slot":0,"level":"INFO","line":696,"msg":"new slot","n_ctx_slot":2048,"tid":"140312867505728","timestamp":1715039699}
{"function":"update_slots","level":"INFO","line":1593,"msg":"all slots are idle","tid":"140312150279744","timestamp":1715039699}
{"function":"launch_slot_with_data","id_slot":0,"id_task":0,"level":"INFO","line":970,"msg":"slot is processing task","tid":"140312150279744","timestamp":1715039700}
{"function":"update_slots","id_slot":0,"id_task":0,"level":"INFO","line":1841,"msg":"kv cache rm [p0, end)","p0":0,"tid":"140312150279744","timestamp":1715039700}
{"function":"print_timings","id_slot":0,"id_task":0,"level":"INFO","line":304,"msg":"prompt eval time = 629.32 ms / 11 tokens ( 57.21 ms per token, 17.48 tokens per second)","n_prompt_tokens_processed":11,"n_tokens_second":17.47907278285907,"t_prompt_processing":629.324,"t_token":57.21127272727272,"tid":"140312150279744","timestamp":1715039704}
{"function":"print_timings","id_slot":0,"id_task":0,"level":"INFO","line":320,"msg":"generation eval time = 3294.66 ms / 25 runs ( 131.79 ms per token, 7.59 tokens per second)","n_decoded":25,"n_tokens_second":7.588043307694853,"t_token":131.78628,"t_token_generation":3294.657,"tid":"140312150279744","timestamp":1715039704}
{"function":"print_timings","id_slot":0,"id_task":0,"level":"INFO","line":331,"msg":" total time = 3923.98 ms","t_prompt_processing":629.324,"t_token_generation":3294.657,"t_total":3923.981,"tid":"140312150279744","timestamp":1715039704}
{"function":"update_slots","id_slot":0,"id_task":0,"level":"INFO","line":1567,"msg":"slot released","n_cache_tokens":35,"n_ctx":2048,"n_past":35,"n_system_tokens":0,"tid":"140312150279744","timestamp":1715039704,"truncated":false}
{"function":"update_slots","level":"INFO","line":1593,"msg":"all slots are idle","tid":"140312150279744","timestamp":1715039704}
{"function":"update_slots","level":"INFO","line":1593,"msg":"all slots are idle","tid":"140312150279744","timestamp":1715039704}
{"function":"launch_slot_with_data","id_slot":0,"id_task":27,"level":"INFO","line":970,"msg":"slot is processing task","tid":"140312150279744","timestamp":1715039708}
{"function":"update_slots","id_slot":0,"id_task":27,"level":"INFO","line":1841,"msg":"kv cache rm [p0, end)","p0":10,"tid":"140312150279744","timestamp":1715039708}
{"function":"update_slots","id_slot":0,"id_task":27,"level":"INFO","line":1567,"msg":"slot released","n_cache_tokens":112,"n_ctx":2048,"n_past":112,"n_system_tokens":0,"tid":"140312150279744","timestamp":1715039720,"truncated":false}
{"function":"update_slots","level":"INFO","line":1593,"msg":"all slots are idle","tid":"140312150279744","timestamp":1715039720}
{"function":"launch_slot_with_data","id_slot":0,"id_task":94,"level":"INFO","line":970,"msg":"slot is processing task","tid":"140312150279744","timestamp":1715039783}
{"function":"update_slots","id_slot":0,"id_task":94,"level":"INFO","line":1841,"msg":"kv cache rm [p0, end)","p0":10,"tid":"140312150279744","timestamp":1715039783}
{"function":"print_timings","id_slot":0,"id_task":94,"level":"INFO","line":304,"msg":"prompt eval time = 139.78 ms / 1 tokens ( 139.78 ms per token, 7.15 tokens per second)","n_prompt_tokens_processed":1,"n_tokens_second":7.154048118127642,"t_prompt_processing":139.781,"t_token":139.781,"tid":"140312150279744","timestamp":1715039787}
{"function":"print_timings","id_slot":0,"id_task":94,"level":"INFO","line":320,"msg":"generation eval time = 3497.36 ms / 25 runs ( 139.89 ms per token, 7.15 tokens per second)","n_decoded":25,"n_tokens_second":7.148259184440812,"t_token":139.8942,"t_token_generation":3497.355,"tid":"140312150279744","timestamp":1715039787}
{"function":"print_timings","id_slot":0,"id_task":94,"level":"INFO","line":331,"msg":" total time = 3637.14 ms",[GIN] 2024/05/06 - 23:56:27 | 200 | 3.638244055s | 127.0.0.1 | POST "/api/chat"
time=2024-05-06T23:58:08.429Z level=INFO source=images.go:800 msg="total blobs: 22"
time=2024-05-06T23:58:08.430Z level=INFO source=images.go:807 msg="total unused blobs removed: 0"
time=2024-05-06T23:58:08.431Z level=INFO source=routes.go:1019 msg="Listening on [::]:11434 (version 0.1.28)"
time=2024-05-06T23:58:08.431Z level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-05-06T23:58:10.587Z level=INFO source=payload_common.go:150 msg="Dynamic LLM libraries [cpu cuda_v11 cpu_avx2 cpu_avx rocm_v60000]"
time=2024-05-06T23:58:10.589Z level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-05-06T23:58:10.589Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-05-06T23:58:10.599Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.78]"
time=2024-05-06T23:58:10.612Z level=INFO source=gpu.go:249 msg="Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.78: nvml vram init failure: 999"
time=2024-05-06T23:58:10.612Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-06T23:58:10.612Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-06T23:58:10.613Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions []"
time=2024-05-06T23:58:10.613Z level=INFO source=amd_linux.go:92 msg="all detected amdgpus are skipped, falling back to CPU"
time=2024-05-06T23:58:10.613Z level=INFO source=routes.go:1042 msg="no GPU detected"
[GIN] 2024/05/06 - 23:58:10 | 200 | 38.552µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/06 - 23:58:10 | 200 | 5.985963ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/06 - 23:58:10 | 200 | 273.639µs | 127.0.0.1 | POST "/api/show"
time=2024-05-06T23:58:11.539Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-06T23:58:11.539Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-06T23:58:11.539Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions []"
time=2024-05-06T23:58:11.539Z level=INFO source=amd_linux.go:92 msg="all detected amdgpus are skipped, falling back to CPU"
time=2024-05-06T23:58:11.539Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-06T23:58:11.539Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-06T23:58:11.539Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions []"
time=2024-05-06T23:58:11.539Z level=INFO source=amd_linux.go:92 msg="all detected amdgpus are skipped, falling back to CPU"
time=2024-05-06T23:58:11.539Z level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
time=2024-05-06T23:58:11.540Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /root/.ollama/assets/0.1.28/cpu_avx2/libext_server.so"
time=2024-05-06T23:58:11.540Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256:00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_tensors: ggml ctx size = 0.11 MiB
llm_load_tensors: CPU buffer size = 4437.80 MiB
.......................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU input buffer size = 13.02 MiB
llama_new_context_with_model: CPU compute buffer size = 258.50 MiB
llama_new_context_with_model: graph splits (measure): 1
time=2024-05-06T23:58:15.475Z level=INFO source=dyn_ext_server.go:162 msg="Starting llama main loop"
[GIN] 2024/05/06 - 23:58:15 | 200 | 4.854954195s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/05/06 - 23:58:20 | 200 | 4.013798182s | 127.0.0.1 | POST "/api/chat"
```
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.33
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4216/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3420
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3420/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3420/comments
|
https://api.github.com/repos/ollama/ollama/issues/3420/events
|
https://github.com/ollama/ollama/pull/3420
| 2,216,706,201
|
PR_kwDOJ0Z1Ps5rPIJ2
| 3,420
|
Update langchain imports
|
{
"login": "kungfu-eric",
"id": 87145506,
"node_id": "MDQ6VXNlcjg3MTQ1NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/87145506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kungfu-eric",
"html_url": "https://github.com/kungfu-eric",
"followers_url": "https://api.github.com/users/kungfu-eric/followers",
"following_url": "https://api.github.com/users/kungfu-eric/following{/other_user}",
"gists_url": "https://api.github.com/users/kungfu-eric/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kungfu-eric/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kungfu-eric/subscriptions",
"organizations_url": "https://api.github.com/users/kungfu-eric/orgs",
"repos_url": "https://api.github.com/users/kungfu-eric/repos",
"events_url": "https://api.github.com/users/kungfu-eric/events{/privacy}",
"received_events_url": "https://api.github.com/users/kungfu-eric/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-30T23:50:58
| 2024-11-21T09:28:54
| 2024-11-21T09:28:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3420",
"html_url": "https://github.com/ollama/ollama/pull/3420",
"diff_url": "https://github.com/ollama/ollama/pull/3420.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3420.patch",
"merged_at": null
}
|
Langchain has moved their components around
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3420/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4140
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4140/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4140/comments
|
https://api.github.com/repos/ollama/ollama/issues/4140/events
|
https://github.com/ollama/ollama/issues/4140
| 2,278,424,639
|
I_kwDOJ0Z1Ps6HzgA_
| 4,140
|
Add binary support for Nvidia Jetson Nano- JetPack 4
|
{
"login": "dtischler",
"id": 18220601,
"node_id": "MDQ6VXNlcjE4MjIwNjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/18220601?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dtischler",
"html_url": "https://github.com/dtischler",
"followers_url": "https://api.github.com/users/dtischler/followers",
"following_url": "https://api.github.com/users/dtischler/following{/other_user}",
"gists_url": "https://api.github.com/users/dtischler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dtischler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dtischler/subscriptions",
"organizations_url": "https://api.github.com/users/dtischler/orgs",
"repos_url": "https://api.github.com/users/dtischler/repos",
"events_url": "https://api.github.com/users/dtischler/events{/privacy}",
"received_events_url": "https://api.github.com/users/dtischler/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-05-03T21:06:53
| 2024-07-25T19:55:39
| 2024-07-25T16:41:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi folks, I have been experimenting with attempting to get GPU acceleration working on the older (but still nice!) Jetson Nano Developer Kit hardware with Ollama. I have both the 2gb and 4gb RAM versions. I've been unable to get it working, due to this situation:
Nvidia only provides JetPack 4.6 for these devices, which is based on Ubuntu 18.04, and contains built-in CUDA 10.2. The GPU is an old Maxwell generation, `sm_53`, which needs to be added to `gen_common.sh`, no problem there.
I install Go 1.22, and Cmake 2.29 successfully, and add to `.profile` - All good there too.
However, the challenge becomes `gcc`: Ollama (and llama.cpp from what I gather) require gcc-11, however, CUDA 10.2 is not supported past gcc-8. The JetPack / Ubuntu distribution includes gcc-7.5 out-of-the-box. So attempting to build Ollama from source (because I need to add `sm_53` in gen_common.sh) results in errors, and fails of course. I can upgrade to gcc-11 rather easily, and then a build will actually complete, but CUDA and the GPU are not usable, and it falls back to CPU inferencing.
Any ideas on how to overcome the chicken-and-egg scenario, or to build *only* the actual CUDA bits with gcc-7.5 and the rest of Ollama/llama.cpp with gcc-11?
I actually tried injecting `update-alternatives --set gcc /usr/bin/gcc-7` into `gen_linux.sh` just before the CUDA bits, but that didn't work as the build is looping through components and some of the next bits need to go back to gcc-11 😄
Thanks!
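One possible way around the compiler mismatch described above, sketched as a build fragment. `CMAKE_CUDA_HOST_COMPILER` (CMake) and `-ccbin` (nvcc) are real options for pointing only the CUDA compilation at a different host compiler; whether Ollama's generate scripts forward them unchanged is an assumption, and the `gcc-8` path below is hypothetical.

```shell
# Keep gcc-11 as the default compiler for the rest of the build, but tell
# CMake to hand the CUDA translation units to an older gcc that CUDA 10.2
# still supports (the gcc-8 path here is a hypothetical install location).
export CMAKE_CUDA_HOST_COMPILER=/usr/bin/gcc-8

# Equivalent idea when invoking nvcc directly:
#   nvcc -ccbin /usr/bin/gcc-8 ...
```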
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4140/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4140/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2865
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2865/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2865/comments
|
https://api.github.com/repos/ollama/ollama/issues/2865/events
|
https://github.com/ollama/ollama/issues/2865
| 2,163,507,461
|
I_kwDOJ0Z1Ps6A9IEF
| 2,865
|
Privacy settings on models
|
{
"login": "trymeouteh",
"id": 31172274,
"node_id": "MDQ6VXNlcjMxMTcyMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trymeouteh",
"html_url": "https://github.com/trymeouteh",
"followers_url": "https://api.github.com/users/trymeouteh/followers",
"following_url": "https://api.github.com/users/trymeouteh/following{/other_user}",
"gists_url": "https://api.github.com/users/trymeouteh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trymeouteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trymeouteh/subscriptions",
"organizations_url": "https://api.github.com/users/trymeouteh/orgs",
"repos_url": "https://api.github.com/users/trymeouteh/repos",
"events_url": "https://api.github.com/users/trymeouteh/events{/privacy}",
"received_events_url": "https://api.github.com/users/trymeouteh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-03-01T14:15:35
| 2025-01-28T18:18:29
| 2024-03-04T08:31:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The ability to customise the privacy settings for each model. These privacy settings could restrict what the model can do on your device:
- Accessing the internet (disable a model from accessing the internet, ensuring it only runs offline)
- Reading files (cannot read files on your system)
- Writing files (cannot create files on your system)
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2865/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/760
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/760/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/760/comments
|
https://api.github.com/repos/ollama/ollama/issues/760/events
|
https://github.com/ollama/ollama/pull/760
| 1,938,711,185
|
PR_kwDOJ0Z1Ps5ckNoD
| 760
|
Mxyng/more downloads
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-11T20:51:53
| 2023-10-11T21:33:12
| 2023-10-11T21:33:10
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/760",
"html_url": "https://github.com/ollama/ollama/pull/760",
"diff_url": "https://github.com/ollama/ollama/pull/760.diff",
"patch_url": "https://github.com/ollama/ollama/pull/760.patch",
"merged_at": "2023-10-11T21:33:10"
}
|
minor tweaks to downloads
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/760/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3740
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3740/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3740/comments
|
https://api.github.com/repos/ollama/ollama/issues/3740/events
|
https://github.com/ollama/ollama/issues/3740
| 2,251,849,784
|
I_kwDOJ0Z1Ps6GOIA4
| 3,740
|
ollama serve yields "'model' not found" error because of incorrect default OLLAMA_MODELS value on Linux
|
{
"login": "cedricvidal",
"id": 33618,
"node_id": "MDQ6VXNlcjMzNjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/33618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cedricvidal",
"html_url": "https://github.com/cedricvidal",
"followers_url": "https://api.github.com/users/cedricvidal/followers",
"following_url": "https://api.github.com/users/cedricvidal/following{/other_user}",
"gists_url": "https://api.github.com/users/cedricvidal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cedricvidal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cedricvidal/subscriptions",
"organizations_url": "https://api.github.com/users/cedricvidal/orgs",
"repos_url": "https://api.github.com/users/cedricvidal/repos",
"events_url": "https://api.github.com/users/cedricvidal/events{/privacy}",
"received_events_url": "https://api.github.com/users/cedricvidal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-04-19T00:42:44
| 2024-11-22T07:49:08
| 2024-05-14T23:41:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
On Linux, the `pull` and `run` commands use the `/usr/share/ollama/.ollama/models` models folder but `serve` uses the `OLLAMA_MODELS` env var which defaults to `~/.ollama/models` as per the help:
```
$ ollama serve --help
OLLAMA_MODELS The path to the models directory (default is "~/.ollama/models")
```
By default, the `serve` command will therefore not see any models and will yield an error such as the following:
```
'llama3:latest' not found, try pulling it first
```
A workaround is to set the `OLLAMA_MODELS` to point to the `/usr/share/ollama/.ollama/models` directory like so:
```
export OLLAMA_MODELS=/usr/share/ollama/.ollama/models
```
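As a hedged illustration (paths taken from the text above, not from Ollama's documentation), the mismatch can be made visible by printing the directory `serve` will resolve:

```shell
# Sketch of the workaround above. The fallback value mirrors the help text;
# the exported path is the directory `pull`/`run` write to on Linux.
export OLLAMA_MODELS=/usr/share/ollama/.ollama/models
echo "serve will read models from: ${OLLAMA_MODELS:-$HOME/.ollama/models}"
```

On systemd-based installs the variable can presumably be set for the service via a drop-in (`systemctl edit ollama`) rather than a shell export, so it survives reboots.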
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3740/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6868
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6868/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6868/comments
|
https://api.github.com/repos/ollama/ollama/issues/6868/events
|
https://github.com/ollama/ollama/issues/6868
| 2,535,155,178
|
I_kwDOJ0Z1Ps6XG2Xq
| 6,868
|
API request with Chinese characters in prompt not correctly received...
|
{
"login": "ingted",
"id": 4289161,
"node_id": "MDQ6VXNlcjQyODkxNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4289161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ingted",
"html_url": "https://github.com/ingted",
"followers_url": "https://api.github.com/users/ingted/followers",
"following_url": "https://api.github.com/users/ingted/following{/other_user}",
"gists_url": "https://api.github.com/users/ingted/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ingted/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ingted/subscriptions",
"organizations_url": "https://api.github.com/users/ingted/orgs",
"repos_url": "https://api.github.com/users/ingted/repos",
"events_url": "https://api.github.com/users/ingted/events{/privacy}",
"received_events_url": "https://api.github.com/users/ingted/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-09-19T02:47:21
| 2024-09-25T01:48:00
| 2024-09-25T01:47:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Same issue as [API request with Chinese characters in prompt](https://github.com/ollama/ollama/issues/3793)
```
$url = 'http://localhost:11434/api/generate'
$body = @{
model = "llama3-zh:latest"
prompt = "一加一是多少"
format = "json"
stream = $false
} | ConvertTo-Json
$headers = @{
"Content-Type" = "application/json"
"Charset" = "utf-8"
}
$response = Invoke-WebRequest -Uri $url -Method Post -Body $body -Headers $headers
$response.Content
```
Response:
```
{"model":"llama3-zh:latest","created_at":"2024-09-19T02:35:55.596601Z","response":"{\"query\": \"What is the best way to learn a new language?\", \"context\": {\"language\": \"Spanish\", \"level\": \"beginner\"}}","done":true,"done_reason":"stop","context":[319,27,91,318,5011,91,29,882,319,27708,7801,27,91,318,6345,91,1459,27,91,318,5011,91,29,78191,319,5018,1663,794,330,3923,374,279,1888,1648,311,4048,264,502,4221,32111,330,2196,794,5324,11789,794,330,62897,498,330,3374,794,330,7413,1215,32075],"total_duration":665925000,"load_duration":54531900,"prompt_eval_count":25,"prompt_eval_duration":63678000,"eval_count":32,"eval_duration":544130000}
```
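A hedged curl equivalent of the PowerShell script above (assuming the problem is body re-encoding on the client side): sending the body as raw UTF-8 bytes with an explicit charset rules the client out as the source of the corruption.

```shell
# Not the reporter's script: POST the prompt as raw UTF-8 bytes.
# --data-binary sends the body verbatim, with no re-encoding by curl.
BODY='{"model":"llama3-zh:latest","prompt":"一加一是多少","stream":false}'
curl -s http://localhost:11434/api/generate \
  -H 'Content-Type: application/json; charset=utf-8' \
  --data-binary "$BODY" \
  || echo "request failed (is ollama serve running?)"
```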
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.11
|
{
"login": "ingted",
"id": 4289161,
"node_id": "MDQ6VXNlcjQyODkxNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4289161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ingted",
"html_url": "https://github.com/ingted",
"followers_url": "https://api.github.com/users/ingted/followers",
"following_url": "https://api.github.com/users/ingted/following{/other_user}",
"gists_url": "https://api.github.com/users/ingted/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ingted/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ingted/subscriptions",
"organizations_url": "https://api.github.com/users/ingted/orgs",
"repos_url": "https://api.github.com/users/ingted/repos",
"events_url": "https://api.github.com/users/ingted/events{/privacy}",
"received_events_url": "https://api.github.com/users/ingted/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6868/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5121
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5121/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5121/comments
|
https://api.github.com/repos/ollama/ollama/issues/5121/events
|
https://github.com/ollama/ollama/pull/5121
| 2,360,637,476
|
PR_kwDOJ0Z1Ps5y3qy1
| 5,121
|
deepseek v2 graph
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-06-18T20:18:43
| 2024-06-19T02:35:00
| 2024-06-18T23:30:58
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5121",
"html_url": "https://github.com/ollama/ollama/pull/5121",
"diff_url": "https://github.com/ollama/ollama/pull/5121.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5121.patch",
"merged_at": "2024-06-18T23:30:58"
}
|
Fixes #5113
Fixes #4799
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5121/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6718
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6718/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6718/comments
|
https://api.github.com/repos/ollama/ollama/issues/6718/events
|
https://github.com/ollama/ollama/pull/6718
| 2,515,039,893
|
PR_kwDOJ0Z1Ps566MpT
| 6,718
|
docs: update llama3 to llama3.1
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-09T22:31:50
| 2024-09-10T05:47:18
| 2024-09-10T05:47:16
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6718",
"html_url": "https://github.com/ollama/ollama/pull/6718",
"diff_url": "https://github.com/ollama/ollama/pull/6718.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6718.patch",
"merged_at": "2024-09-10T05:47:16"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6718/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1215
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1215/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1215/comments
|
https://api.github.com/repos/ollama/ollama/issues/1215/events
|
https://github.com/ollama/ollama/pull/1215
| 2,003,283,928
|
PR_kwDOJ0Z1Ps5f982e
| 1,215
|
use a pulsating spinner
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-11-21T01:28:55
| 2023-11-30T21:35:15
| 2023-11-30T21:35:14
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1215",
"html_url": "https://github.com/ollama/ollama/pull/1215",
"diff_url": "https://github.com/ollama/ollama/pull/1215.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1215.patch",
"merged_at": null
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1215/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6384
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6384/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6384/comments
|
https://api.github.com/repos/ollama/ollama/issues/6384/events
|
https://github.com/ollama/ollama/issues/6384
| 2,469,335,465
|
I_kwDOJ0Z1Ps6TLxGp
| 6,384
|
Open WebUI: Server Connection Error
|
{
"login": "ChaoYue97",
"id": 63837469,
"node_id": "MDQ6VXNlcjYzODM3NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/63837469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChaoYue97",
"html_url": "https://github.com/ChaoYue97",
"followers_url": "https://api.github.com/users/ChaoYue97/followers",
"following_url": "https://api.github.com/users/ChaoYue97/following{/other_user}",
"gists_url": "https://api.github.com/users/ChaoYue97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChaoYue97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChaoYue97/subscriptions",
"organizations_url": "https://api.github.com/users/ChaoYue97/orgs",
"repos_url": "https://api.github.com/users/ChaoYue97/repos",
"events_url": "https://api.github.com/users/ChaoYue97/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChaoYue97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-16T02:58:40
| 2024-11-20T03:36:06
| 2024-11-20T03:36:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I try to download a model through the WebUI, I encounter an error: Open WebUI: Server Connection Error, and I get a notification that the download has been canceled.

### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.6
|
{
"login": "ChaoYue97",
"id": 63837469,
"node_id": "MDQ6VXNlcjYzODM3NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/63837469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChaoYue97",
"html_url": "https://github.com/ChaoYue97",
"followers_url": "https://api.github.com/users/ChaoYue97/followers",
"following_url": "https://api.github.com/users/ChaoYue97/following{/other_user}",
"gists_url": "https://api.github.com/users/ChaoYue97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChaoYue97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChaoYue97/subscriptions",
"organizations_url": "https://api.github.com/users/ChaoYue97/orgs",
"repos_url": "https://api.github.com/users/ChaoYue97/repos",
"events_url": "https://api.github.com/users/ChaoYue97/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChaoYue97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6384/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4607
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4607/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4607/comments
|
https://api.github.com/repos/ollama/ollama/issues/4607/events
|
https://github.com/ollama/ollama/issues/4607
| 2,314,538,804
|
I_kwDOJ0Z1Ps6J9Q80
| 4,607
|
/THUDM/CogVLM2
|
{
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enryteam/followers",
"following_url": "https://api.github.com/users/enryteam/following{/other_user}",
"gists_url": "https://api.github.com/users/enryteam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enryteam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enryteam/subscriptions",
"organizations_url": "https://api.github.com/users/enryteam/orgs",
"repos_url": "https://api.github.com/users/enryteam/repos",
"events_url": "https://api.github.com/users/enryteam/events{/privacy}",
"received_events_url": "https://api.github.com/users/enryteam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 3
| 2024-05-24T06:36:55
| 2024-06-13T18:06:59
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://github.com/THUDM/CogVLM2
thanks!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4607/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4607/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3121
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3121/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3121/comments
|
https://api.github.com/repos/ollama/ollama/issues/3121/events
|
https://github.com/ollama/ollama/pull/3121
| 2,184,672,580
|
PR_kwDOJ0Z1Ps5pirfr
| 3,121
|
simplify parsing safetensor file
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-03-13T18:44:49
| 2024-06-05T20:12:03
| 2024-04-10T19:45:11
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3121",
"html_url": "https://github.com/ollama/ollama/pull/3121",
"diff_url": "https://github.com/ollama/ollama/pull/3121.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3121.patch",
"merged_at": null
}
|
remove the indirection in parsing the safetensor head json
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3121/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3121/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3873
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3873/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3873/comments
|
https://api.github.com/repos/ollama/ollama/issues/3873/events
|
https://github.com/ollama/ollama/issues/3873
| 2,260,981,795
|
I_kwDOJ0Z1Ps6Gw9gj
| 3,873
|
Suggestion for continuous batching
|
{
"login": "baptistejamin",
"id": 866499,
"node_id": "MDQ6VXNlcjg2NjQ5OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/866499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baptistejamin",
"html_url": "https://github.com/baptistejamin",
"followers_url": "https://api.github.com/users/baptistejamin/followers",
"following_url": "https://api.github.com/users/baptistejamin/following{/other_user}",
"gists_url": "https://api.github.com/users/baptistejamin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baptistejamin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baptistejamin/subscriptions",
"organizations_url": "https://api.github.com/users/baptistejamin/orgs",
"repos_url": "https://api.github.com/users/baptistejamin/repos",
"events_url": "https://api.github.com/users/baptistejamin/events{/privacy}",
"received_events_url": "https://api.github.com/users/baptistejamin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-24T10:40:53
| 2024-05-01T22:27:05
| 2024-05-01T22:27:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://github.com/ollama/ollama/blob/74d2a9ef9aa6a4ee31f027926f3985c9e1610346/llm/server.go#L197C6-L197C21
It appears continuous batching can work, but `ctx-size` should be updated to NUM_PARALLEL * ctx-size: https://github.com/ggerganov/llama.cpp/discussions/4130#discussioncomment-8053636
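The sizing rule from the linked discussion can be sketched as follows (variable names hypothetical):

```shell
# If each of NUM_PARALLEL slots needs PER_REQUEST_CTX tokens of context,
# the value passed to the server must cover all slots at once.
NUM_PARALLEL=4
PER_REQUEST_CTX=2048
CTX_SIZE=$((NUM_PARALLEL * PER_REQUEST_CTX))
echo "$CTX_SIZE"   # 8192
```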
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3873/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6628
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6628/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6628/comments
|
https://api.github.com/repos/ollama/ollama/issues/6628/events
|
https://github.com/ollama/ollama/issues/6628
| 2,504,496,397
|
I_kwDOJ0Z1Ps6VR5UN
| 6,628
|
no space left on device - ubuntu
|
{
"login": "fahadshery",
"id": 7301267,
"node_id": "MDQ6VXNlcjczMDEyNjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7301267?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fahadshery",
"html_url": "https://github.com/fahadshery",
"followers_url": "https://api.github.com/users/fahadshery/followers",
"following_url": "https://api.github.com/users/fahadshery/following{/other_user}",
"gists_url": "https://api.github.com/users/fahadshery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fahadshery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fahadshery/subscriptions",
"organizations_url": "https://api.github.com/users/fahadshery/orgs",
"repos_url": "https://api.github.com/users/fahadshery/repos",
"events_url": "https://api.github.com/users/fahadshery/events{/privacy}",
"received_events_url": "https://api.github.com/users/fahadshery/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-09-04T07:12:43
| 2024-09-05T06:03:07
| 2024-09-04T12:47:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am getting the following error:
```
root@ollama:/home/user/hugging-face-models# ollama create Lily-Cybersecurity-7B-v0.2
transferring model data 100%
converting model
Error: write /usr/share/ollama/.ollama/models/blobs/3649787503/fp164117961266: no space left on device
```
but I appear to have free space:
```
root@ollama:/home/user/hugging-face-models# df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 4.8G 2.6M 4.7G 1% /run
efivarfs 256K 123K 129K 49% /sys/firmware/efi/efivars
/dev/mapper/ubuntu--vg-ubuntu--lv 274G 199G 64G 76% /
tmpfs 30G 0 30G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/sda2 2.0G 183M 1.7G 10% /boot
/dev/sda1 1.1G 6.2M 1.1G 1% /boot/efi
overlay 274G 199G 64G 76% /var/lib/docker/overlay2/dda534be667e34c729e88f904317afb8564a19c9b619e0c0ec6381127d996900/merged
```
How do I resolve this?
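Not part of the original report: one thing worth ruling out is inode exhaustion, since the kernel returns the same "no space left on device" (ENOSPC) error when a filesystem runs out of inodes even though `df -h` shows free bytes. A minimal standard-library sketch that checks both; the models path below is an assumption taken from the error message.

```python
import os
import shutil

def space_report(path="/usr/share/ollama/.ollama/models"):
    """Report free bytes and free inodes for the filesystem holding `path`.

    ENOSPC can mean exhausted inodes, not just exhausted bytes,
    so check both before concluding there is "enough space".
    """
    usage = shutil.disk_usage(path)  # total/used/free bytes
    st = os.statvfs(path)            # filesystem stats, including inode counts
    return {
        "free_bytes": usage.free,
        "free_inodes": st.f_favail,  # inodes available to unprivileged users
    }

if __name__ == "__main__":
    print(space_report("/"))
```

If `free_inodes` is at or near zero while `free_bytes` is large, the error is explained; the equivalent one-liner at the shell is `df -i`.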
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.6
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6628/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3416
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3416/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3416/comments
|
https://api.github.com/repos/ollama/ollama/issues/3416/events
|
https://github.com/ollama/ollama/issues/3416
| 2,216,440,352
|
I_kwDOJ0Z1Ps6EHDIg
| 3,416
|
Vector data is lost when using the mxbai-embed-large model
|
{
"login": "taurusduan",
"id": 30854760,
"node_id": "MDQ6VXNlcjMwODU0NzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/30854760?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taurusduan",
"html_url": "https://github.com/taurusduan",
"followers_url": "https://api.github.com/users/taurusduan/followers",
"following_url": "https://api.github.com/users/taurusduan/following{/other_user}",
"gists_url": "https://api.github.com/users/taurusduan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taurusduan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taurusduan/subscriptions",
"organizations_url": "https://api.github.com/users/taurusduan/orgs",
"repos_url": "https://api.github.com/users/taurusduan/repos",
"events_url": "https://api.github.com/users/taurusduan/events{/privacy}",
"received_events_url": "https://api.github.com/users/taurusduan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-03-30T13:13:03
| 2024-04-19T15:41:35
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Vector data is lost.
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
amd64
### Platform
Docker
### Ollama version
0.1.30
### GPU
Nvidia
### GPU info
_No response_
### CPU
Intel
### Other software

The generated data sets suffer severe loss: of 5000 data sets, only around 1000 remain in the end. I have already run comparisons with other models, and they work correctly without any loss.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3416/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3239
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3239/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3239/comments
|
https://api.github.com/repos/ollama/ollama/issues/3239/events
|
https://github.com/ollama/ollama/issues/3239
| 2,194,286,990
|
I_kwDOJ0Z1Ps6CyimO
| 3,239
|
Vercel AI SDK with Ollama - not for production
|
{
"login": "jakobhoeg",
"id": 114422072,
"node_id": "U_kgDOBtHxOA",
"avatar_url": "https://avatars.githubusercontent.com/u/114422072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jakobhoeg",
"html_url": "https://github.com/jakobhoeg",
"followers_url": "https://api.github.com/users/jakobhoeg/followers",
"following_url": "https://api.github.com/users/jakobhoeg/following{/other_user}",
"gists_url": "https://api.github.com/users/jakobhoeg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jakobhoeg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jakobhoeg/subscriptions",
"organizations_url": "https://api.github.com/users/jakobhoeg/orgs",
"repos_url": "https://api.github.com/users/jakobhoeg/repos",
"events_url": "https://api.github.com/users/jakobhoeg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jakobhoeg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 4
| 2024-03-19T08:02:15
| 2024-12-19T21:39:31
| 2024-12-19T21:39:31
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The [blog post](https://ollama.com/blog/openai-compatibility) about how the Vercel AI SDK can be integrated with Ollama should be updated to note that this setup is NOT for production and only works locally.
Calls from Next.js API routes can't be proxied to localhost: when such a route is called on a hosted instance, it runs on Vercel's servers and can't reach an Ollama instance running on a local machine.
### What did you expect to see?
I expected to be able to host it through Vercel and still be able to hit my locally running Ollama. The post doesn't state that this isn't possible, and since the Vercel AI SDK is featured, it implies that it should work.
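A workaround sketch, not from the issue itself: if the Ollama instance is reachable from Vercel's servers (for example via a tunnel or a publicly routable host), the API route can resolve the Ollama base URL from an environment variable instead of hardcoding localhost. The `OLLAMA_BASE_URL` variable name and the helper are illustrative assumptions.

```javascript
// Resolve the Ollama base URL from the deployment environment.
// "http://localhost:11434" only works in local dev; a deployed
// route needs a host that Vercel's servers can actually reach.
function ollamaEndpoint(path, base) {
  const resolved = base || process.env.OLLAMA_BASE_URL || "http://localhost:11434";
  return new URL(path, resolved).toString();
}

// Example: build the chat endpoint a route handler would POST to.
console.log(ollamaEndpoint("/api/chat"));
```

With this shape, local development falls back to localhost while a deployed instance reads the reachable host from its environment.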
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
_No response_
### Platform
_No response_
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
Vercel
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3239/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5785
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5785/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5785/comments
|
https://api.github.com/repos/ollama/ollama/issues/5785/events
|
https://github.com/ollama/ollama/issues/5785
| 2,417,830,869
|
I_kwDOJ0Z1Ps6QHSvV
| 5,785
|
add GraphRAG
|
{
"login": "tqangxl",
"id": 9669944,
"node_id": "MDQ6VXNlcjk2Njk5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9669944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tqangxl",
"html_url": "https://github.com/tqangxl",
"followers_url": "https://api.github.com/users/tqangxl/followers",
"following_url": "https://api.github.com/users/tqangxl/following{/other_user}",
"gists_url": "https://api.github.com/users/tqangxl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tqangxl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tqangxl/subscriptions",
"organizations_url": "https://api.github.com/users/tqangxl/orgs",
"repos_url": "https://api.github.com/users/tqangxl/repos",
"events_url": "https://api.github.com/users/tqangxl/events{/privacy}",
"received_events_url": "https://api.github.com/users/tqangxl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-19T04:09:18
| 2024-08-13T01:16:09
| 2024-08-13T01:16:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
add GraphRAG
|
{
"login": "tqangxl",
"id": 9669944,
"node_id": "MDQ6VXNlcjk2Njk5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9669944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tqangxl",
"html_url": "https://github.com/tqangxl",
"followers_url": "https://api.github.com/users/tqangxl/followers",
"following_url": "https://api.github.com/users/tqangxl/following{/other_user}",
"gists_url": "https://api.github.com/users/tqangxl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tqangxl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tqangxl/subscriptions",
"organizations_url": "https://api.github.com/users/tqangxl/orgs",
"repos_url": "https://api.github.com/users/tqangxl/repos",
"events_url": "https://api.github.com/users/tqangxl/events{/privacy}",
"received_events_url": "https://api.github.com/users/tqangxl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5785/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5785/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1176
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1176/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1176/comments
|
https://api.github.com/repos/ollama/ollama/issues/1176/events
|
https://github.com/ollama/ollama/pull/1176
| 1,999,524,461
|
PR_kwDOJ0Z1Ps5fxZoN
| 1,176
|
update faq
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-17T16:41:59
| 2023-11-17T16:43:00
| 2023-11-17T16:42:59
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1176",
"html_url": "https://github.com/ollama/ollama/pull/1176",
"diff_url": "https://github.com/ollama/ollama/pull/1176.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1176.patch",
"merged_at": "2023-11-17T16:42:59"
}
| null |
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1176/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/277
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/277/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/277/comments
|
https://api.github.com/repos/ollama/ollama/issues/277/events
|
https://github.com/ollama/ollama/issues/277
| 1,836,114,449
|
I_kwDOJ0Z1Ps5tcOIR
| 277
|
error on macos
|
{
"login": "Fungungun",
"id": 24078180,
"node_id": "MDQ6VXNlcjI0MDc4MTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/24078180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fungungun",
"html_url": "https://github.com/Fungungun",
"followers_url": "https://api.github.com/users/Fungungun/followers",
"following_url": "https://api.github.com/users/Fungungun/following{/other_user}",
"gists_url": "https://api.github.com/users/Fungungun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fungungun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fungungun/subscriptions",
"organizations_url": "https://api.github.com/users/Fungungun/orgs",
"repos_url": "https://api.github.com/users/Fungungun/repos",
"events_url": "https://api.github.com/users/Fungungun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fungungun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 11
| 2023-08-04T05:56:17
| 2023-09-07T23:10:23
| 2023-09-07T13:22:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
$ ollama run llama2
dyld: Symbol not found: _OBJC_CLASS_$_MTLComputePassDescriptor
  Referenced from: /usr/local/bin/ollama
  Expected in: /System/Library/Frameworks/Metal.framework/Versions/A/Metal
[1]    8409 abort      ollama run llama2
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/277/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2871
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2871/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2871/comments
|
https://api.github.com/repos/ollama/ollama/issues/2871/events
|
https://github.com/ollama/ollama/issues/2871
| 2,164,435,693
|
I_kwDOJ0Z1Ps6BAqrt
| 2,871
|
ollama serve run error
|
{
"login": "songsh",
"id": 2272252,
"node_id": "MDQ6VXNlcjIyNzIyNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2272252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songsh",
"html_url": "https://github.com/songsh",
"followers_url": "https://api.github.com/users/songsh/followers",
"following_url": "https://api.github.com/users/songsh/following{/other_user}",
"gists_url": "https://api.github.com/users/songsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songsh/subscriptions",
"organizations_url": "https://api.github.com/users/songsh/orgs",
"repos_url": "https://api.github.com/users/songsh/repos",
"events_url": "https://api.github.com/users/songsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/songsh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 11
| 2024-03-02T00:55:46
| 2024-04-09T21:27:51
| 2024-03-08T08:09:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When I run `ollama serve`, the error is:
```
Couldn't find '/home/wangbin/.ollama/id_ed25519'. Generating new private key.
Your new public key is:
xxxxxxxxx
Error: listen tcp 127.0.0.1:11434: bind: address already in use
```
When I run `npm run dev`, the error is:
```
> chatbot-ollama@0.1.0 dev
> next dev
sh: 1: next: not found
```
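Not from the original report: `bind: address already in use` usually means another `ollama serve` (or the desktop app) is already listening on Ollama's default port, 11434. A small standard-library sketch to confirm that; the helper name is made up.

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    # Try to connect; a successful connection means some process
    # is already listening on that host/port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    if port_in_use(11434):
        print("Port 11434 is taken -- an Ollama server is likely already running.")
    else:
        print("Port 11434 is free.")
```

If the port is taken, the existing server can simply be used as-is; the `next: not found` error is separate and typically means the project's dependencies haven't been installed (`npm install` in the chatbot-ollama directory).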
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2871/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6079
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6079/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6079/comments
|
https://api.github.com/repos/ollama/ollama/issues/6079/events
|
https://github.com/ollama/ollama/pull/6079
| 2,438,635,480
|
PR_kwDOJ0Z1Ps52625q
| 6,079
|
Add metrics to `api/embed` docs
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-30T20:36:07
| 2024-08-07T21:43:46
| 2024-08-07T21:43:44
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6079",
"html_url": "https://github.com/ollama/ollama/pull/6079",
"diff_url": "https://github.com/ollama/ollama/pull/6079.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6079.patch",
"merged_at": "2024-08-07T21:43:44"
}
| null |
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6079/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3008
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3008/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3008/comments
|
https://api.github.com/repos/ollama/ollama/issues/3008/events
|
https://github.com/ollama/ollama/pull/3008
| 2,176,499,737
|
PR_kwDOJ0Z1Ps5pG2Ge
| 3,008
|
Finish unwinding idempotent payload logic
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-08T17:47:17
| 2024-03-09T17:13:28
| 2024-03-09T17:13:24
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3008",
"html_url": "https://github.com/ollama/ollama/pull/3008",
"diff_url": "https://github.com/ollama/ollama/pull/3008.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3008.patch",
"merged_at": "2024-03-09T17:13:24"
}
|
The recent ROCm change partially removed idempotent payloads, but the ggml-metal.metal file for macOS was still idempotent. This finishes the switch to always extracting the payloads, and now that idempotency is gone, the version directory is no longer useful.
Tested on macOS, Linux, and Windows.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3008/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5935
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5935/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5935/comments
|
https://api.github.com/repos/ollama/ollama/issues/5935/events
|
https://github.com/ollama/ollama/issues/5935
| 2,428,709,595
|
I_kwDOJ0Z1Ps6Qwyrb
| 5,935
|
ollama 0.2.8 doesn't support Multiple GPU H100
|
{
"login": "sksdev27",
"id": 55264082,
"node_id": "MDQ6VXNlcjU1MjY0MDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/55264082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sksdev27",
"html_url": "https://github.com/sksdev27",
"followers_url": "https://api.github.com/users/sksdev27/followers",
"following_url": "https://api.github.com/users/sksdev27/following{/other_user}",
"gists_url": "https://api.github.com/users/sksdev27/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sksdev27/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sksdev27/subscriptions",
"organizations_url": "https://api.github.com/users/sksdev27/orgs",
"repos_url": "https://api.github.com/users/sksdev27/repos",
"events_url": "https://api.github.com/users/sksdev27/events{/privacy}",
"received_events_url": "https://api.github.com/users/sksdev27/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-07-25T00:43:22
| 2024-07-30T21:39:38
| 2024-07-30T21:39:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
So when I launch the latest Ollama 0.2.8 it uses one GPU, but when I use Ollama version 0.1.30 it uses all the GPUs. The fix applied in 0.1.30 didn't make it to 0.2.8.
Here are the logs:
[log_ollama.txt](https://github.com/user-attachments/files/16368867/log_ollama.txt)
_Originally posted by @sksdev27 in https://github.com/ollama/ollama/issues/5024#issuecomment-2249121012_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5935/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5935/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3046
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3046/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3046/comments
|
https://api.github.com/repos/ollama/ollama/issues/3046/events
|
https://github.com/ollama/ollama/pull/3046
| 2,177,866,334
|
PR_kwDOJ0Z1Ps5pLSDa
| 3,046
|
Add ollama executable peer dir for rocm
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-10T19:16:42
| 2024-03-10T19:30:59
| 2024-03-10T19:30:56
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3046",
"html_url": "https://github.com/ollama/ollama/pull/3046",
"diff_url": "https://github.com/ollama/ollama/pull/3046.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3046.patch",
"merged_at": "2024-03-10T19:30:56"
}
|
This allows people who package up ollama on their own to place the rocm dependencies in a peer directory to the ollama executable much like our windows install flow.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3046/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7781
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7781/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7781/comments
|
https://api.github.com/repos/ollama/ollama/issues/7781/events
|
https://github.com/ollama/ollama/issues/7781
| 2,680,286,630
|
I_kwDOJ0Z1Ps6fwe2m
| 7,781
|
Llama3.2 Safetensors adapter not supported?
|
{
"login": "shuaib7860",
"id": 45211189,
"node_id": "MDQ6VXNlcjQ1MjExMTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/45211189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuaib7860",
"html_url": "https://github.com/shuaib7860",
"followers_url": "https://api.github.com/users/shuaib7860/followers",
"following_url": "https://api.github.com/users/shuaib7860/following{/other_user}",
"gists_url": "https://api.github.com/users/shuaib7860/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuaib7860/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuaib7860/subscriptions",
"organizations_url": "https://api.github.com/users/shuaib7860/orgs",
"repos_url": "https://api.github.com/users/shuaib7860/repos",
"events_url": "https://api.github.com/users/shuaib7860/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuaib7860/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-11-21T17:28:19
| 2024-11-21T17:28:43
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi Guys,
First, thanks for creating such a wonderful tool. I have run into a problem which I believe is a bug, but I may be incorrect and could instead be asking for a feature request. Basically, I have fine-tuned a Llama 3.2 model on some local data and then saved the result in safetensors format.
Now I create a model file like below and point it to the safe tensor adapter files. Now when I run the command
ollama create myllama3.2 --file myllama3.2.modelfile
I see the below output which I believe tells me it has been a success.
transferring model data 100%
converting model
But when I run the Ollama list command my newly created model is not displayed. My modelfile starts like the below providing the directory where the safetensors adapter file is stored:
FROM llama3.2:1b
ADAPTER /home/shuaib/tmp/Llama-3.2-1B/
In the Ollama docs and the Modelfile markdown file (https://github.com/ollama/ollama/blob/main/docs/modelfile.md) it says:
Currently supported Safetensor adapters:
Llama (including Llama 2, Llama 3, and Llama 3.1)
Is this why my attempt at creating a model from my fine-tuned model is failing? Because the Llama 3.2 safetensors adapters are not supported yet? Or am I missing something? Also, is there a plan to support the Llama 3.2 safetensors adapters anytime soon?
If it is indeed because of the lack of support for the Llama 3.2 safetensors adapters, should my workaround be to convert my fine-tuned safetensors adapter into GGUF format?
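If the GGUF conversion route works, the Modelfile would then point `ADAPTER` at the converted file instead of the safetensors directory. A minimal sketch, assuming the adapter has already been converted and saved as `adapter.gguf` (a hypothetical filename, not from the original report):
```
FROM llama3.2:1b
ADAPTER /home/shuaib/tmp/Llama-3.2-1B/adapter.gguf
```
The same `ollama create myllama3.2 --file myllama3.2.modelfile` command would then be run against this Modelfile.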
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.1
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7781/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7781/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3173
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3173/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3173/comments
|
https://api.github.com/repos/ollama/ollama/issues/3173/events
|
https://github.com/ollama/ollama/pull/3173
| 2,189,656,366
|
PR_kwDOJ0Z1Ps5pzwvl
| 3,173
|
add `llm/ext_server` directory to `linguist-vendored`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-16T00:29:53
| 2024-03-16T00:46:47
| 2024-03-16T00:46:46
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3173",
"html_url": "https://github.com/ollama/ollama/pull/3173",
"diff_url": "https://github.com/ollama/ollama/pull/3173.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3173.patch",
"merged_at": "2024-03-16T00:46:46"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3173/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7614
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7614/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7614/comments
|
https://api.github.com/repos/ollama/ollama/issues/7614/events
|
https://github.com/ollama/ollama/pull/7614
| 2,648,375,344
|
PR_kwDOJ0Z1Ps6Bd2m8
| 7,614
|
support older GPUS
|
{
"login": "langstonmeister",
"id": 65471211,
"node_id": "MDQ6VXNlcjY1NDcxMjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/65471211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/langstonmeister",
"html_url": "https://github.com/langstonmeister",
"followers_url": "https://api.github.com/users/langstonmeister/followers",
"following_url": "https://api.github.com/users/langstonmeister/following{/other_user}",
"gists_url": "https://api.github.com/users/langstonmeister/gists{/gist_id}",
"starred_url": "https://api.github.com/users/langstonmeister/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/langstonmeister/subscriptions",
"organizations_url": "https://api.github.com/users/langstonmeister/orgs",
"repos_url": "https://api.github.com/users/langstonmeister/repos",
"events_url": "https://api.github.com/users/langstonmeister/events{/privacy}",
"received_events_url": "https://api.github.com/users/langstonmeister/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-11T07:00:04
| 2024-11-11T07:04:46
| 2024-11-11T07:04:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7614",
"html_url": "https://github.com/ollama/ollama/pull/7614",
"diff_url": "https://github.com/ollama/ollama/pull/7614.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7614.patch",
"merged_at": null
}
|
These changes will allow us to compile and run ollama on older hardware. For some of us with older gpus around with lots of VRAM, it is helpful to be able to use them and not rely completely on CPU and RAM. It would be very nice to include them - even if there is a huge warning about how they might not work for everyone.
|
{
"login": "langstonmeister",
"id": 65471211,
"node_id": "MDQ6VXNlcjY1NDcxMjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/65471211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/langstonmeister",
"html_url": "https://github.com/langstonmeister",
"followers_url": "https://api.github.com/users/langstonmeister/followers",
"following_url": "https://api.github.com/users/langstonmeister/following{/other_user}",
"gists_url": "https://api.github.com/users/langstonmeister/gists{/gist_id}",
"starred_url": "https://api.github.com/users/langstonmeister/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/langstonmeister/subscriptions",
"organizations_url": "https://api.github.com/users/langstonmeister/orgs",
"repos_url": "https://api.github.com/users/langstonmeister/repos",
"events_url": "https://api.github.com/users/langstonmeister/events{/privacy}",
"received_events_url": "https://api.github.com/users/langstonmeister/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7614/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3685
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3685/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3685/comments
|
https://api.github.com/repos/ollama/ollama/issues/3685/events
|
https://github.com/ollama/ollama/issues/3685
| 2,247,070,758
|
I_kwDOJ0Z1Ps6F75Qm
| 3,685
|
`FROM .` in Modelfile creates invalid model
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-04-17T00:17:36
| 2024-04-24T22:13:48
| 2024-04-24T22:13:48
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When including `FROM .` in a Modelfile, an incorrect Modelfile will be generated when `.` is used elsewhere:
```
ollama show octopus --modelfile
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM octopus:latest
FROM /Users/jmorgan/.ollama/models/blobs/sha256-33fdea506a5c3f5fde0112d8b8f50c4c009da49469c9bc2ffa973963e445f898
TEMPLATE """{{ @sha256:4e4d269e842069af0c07482684ed8779104f3ecf7cded7a1edd432a1bff54effSystem }}
Query: {{ @sha256:4e4d269e842069af0c07482684ed8779104f3ecf7cded7a1edd432a1bff54effPrompt }}
Response: {{ @sha256:4e4d269e842069af0c07482684ed8779104f3ecf7cded7a1edd432a1bff54effResponse }}"""
SYSTEM """Below is the query from the users, please call the correct function and generate the parameters to call the function@sha256:4e4d269e842069af0c07482684ed8779104f3ecf7cded7a1edd432a1bff54eff"""
```
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
_No response_
### Architecture
_No response_
### Platform
_No response_
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3685/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3039
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3039/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3039/comments
|
https://api.github.com/repos/ollama/ollama/issues/3039/events
|
https://github.com/ollama/ollama/issues/3039
| 2,177,684,715
|
I_kwDOJ0Z1Ps6BzNTr
| 3,039
|
bug: Can't push MobileVLM (mmproj?) model
|
{
"login": "knoopx",
"id": 100993,
"node_id": "MDQ6VXNlcjEwMDk5Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/100993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/knoopx",
"html_url": "https://github.com/knoopx",
"followers_url": "https://api.github.com/users/knoopx/followers",
"following_url": "https://api.github.com/users/knoopx/following{/other_user}",
"gists_url": "https://api.github.com/users/knoopx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/knoopx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/knoopx/subscriptions",
"organizations_url": "https://api.github.com/users/knoopx/orgs",
"repos_url": "https://api.github.com/users/knoopx/repos",
"events_url": "https://api.github.com/users/knoopx/events{/privacy}",
"received_events_url": "https://api.github.com/users/knoopx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-03-10T12:07:15
| 2024-03-12T21:40:57
| 2024-03-11T20:26:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can't push this specific model. Model works fine.
### Details
Source model:
https://huggingface.co/Blombert/MobileVLM-3B-GGUF
Ollama version:
```
(models) knoopx@desktop ~/P/k/models [1]> ollama --version
ollama version is 0.1.28
```
### Modelfile
```
FROM /home/knoopx/Downloads/ggml-model-f16.gguf
FROM /home/knoopx/Downloads/mmproj-model-f16.gguf
TEMPLATE """Q: {{ .Prompt }}
A:"""
PARAMETER stop "Q:"
PARAMETER stop "A:"
```
### Error Log
```
ollama create knoopx/MobileVLM:3b-fp16 -f modelfiles/mobilevlm.modelfile
transferring model data
creating model layer
creating template layer
creating parameters layer
creating config layer
using already created layer sha256:f6ad543613bdfe69289b83a24085e950db4385bac2924cdda1c6702fb9f47923
using already created layer sha256:c8dda06f00ae031c5428591603a4c3a27dbbf4414a1e269ede4d1c450ca31d51
writing layer sha256:8463e3eb5608b7fcea99e47811e2a274e3df65acb7fb75d63f15a8e57109f1af
writing layer sha256:8aab0873042fccf139c01e3d781f6ea678655dd72ee229cadea062733fc66a5e
writing layer sha256:627ed9e87a50a78904ce4538eba0711c8cf0e9304b4925864fa2eff31ce07bb3
writing manifest
success
(models) knoopx@desktop ~/P/k/models> ollama push knoopx/MobileVLM:3b-fp16
retrieving manifest
Error: file does not exist
```
Nothing useful logged with `OLLAMA_DEBUG`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3039/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/107
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/107/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/107/comments
|
https://api.github.com/repos/ollama/ollama/issues/107/events
|
https://github.com/ollama/ollama/issues/107
| 1,810,849,731
|
I_kwDOJ0Z1Ps5r71_D
| 107
|
starcoder support (code models)
|
{
"login": "nathanleclaire",
"id": 1476820,
"node_id": "MDQ6VXNlcjE0NzY4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1476820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathanleclaire",
"html_url": "https://github.com/nathanleclaire",
"followers_url": "https://api.github.com/users/nathanleclaire/followers",
"following_url": "https://api.github.com/users/nathanleclaire/following{/other_user}",
"gists_url": "https://api.github.com/users/nathanleclaire/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nathanleclaire/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nathanleclaire/subscriptions",
"organizations_url": "https://api.github.com/users/nathanleclaire/orgs",
"repos_url": "https://api.github.com/users/nathanleclaire/repos",
"events_url": "https://api.github.com/users/nathanleclaire/events{/privacy}",
"received_events_url": "https://api.github.com/users/nathanleclaire/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-07-18T23:12:49
| 2023-10-01T06:53:22
| 2023-10-01T06:53:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I wonder if https://github.com/bigcode-project/starcoder.cpp support could be added; I would be pretty curious to check out https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder using ollama.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/107/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/107/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4169
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4169/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4169/comments
|
https://api.github.com/repos/ollama/ollama/issues/4169/events
|
https://github.com/ollama/ollama/issues/4169
| 2,279,555,218
|
I_kwDOJ0Z1Ps6H30CS
| 4,169
|
Error: invalid character '\x00' looking for beginning of value
|
{
"login": "TapsHTS",
"id": 61658427,
"node_id": "MDQ6VXNlcjYxNjU4NDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/61658427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TapsHTS",
"html_url": "https://github.com/TapsHTS",
"followers_url": "https://api.github.com/users/TapsHTS/followers",
"following_url": "https://api.github.com/users/TapsHTS/following{/other_user}",
"gists_url": "https://api.github.com/users/TapsHTS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TapsHTS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TapsHTS/subscriptions",
"organizations_url": "https://api.github.com/users/TapsHTS/orgs",
"repos_url": "https://api.github.com/users/TapsHTS/repos",
"events_url": "https://api.github.com/users/TapsHTS/events{/privacy}",
"received_events_url": "https://api.github.com/users/TapsHTS/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-05-05T14:18:22
| 2024-07-12T19:03:55
| 2024-07-11T03:22:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I run `ollama run llama3:8b` or `ollama run llama3`, I get:
Error: invalid character '\x00' looking for beginning of value
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.33
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4169/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7200
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7200/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7200/comments
|
https://api.github.com/repos/ollama/ollama/issues/7200/events
|
https://github.com/ollama/ollama/issues/7200
| 2,586,781,066
|
I_kwDOJ0Z1Ps6aLyWK
| 7,200
|
Hugging Face Idefics3
|
{
"login": "haimat",
"id": 6633976,
"node_id": "MDQ6VXNlcjY2MzM5NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6633976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haimat",
"html_url": "https://github.com/haimat",
"followers_url": "https://api.github.com/users/haimat/followers",
"following_url": "https://api.github.com/users/haimat/following{/other_user}",
"gists_url": "https://api.github.com/users/haimat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haimat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haimat/subscriptions",
"organizations_url": "https://api.github.com/users/haimat/orgs",
"repos_url": "https://api.github.com/users/haimat/repos",
"events_url": "https://api.github.com/users/haimat/events{/privacy}",
"received_events_url": "https://api.github.com/users/haimat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 1
| 2024-10-14T18:42:21
| 2024-10-14T19:09:19
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
There is a new Idefics3 model on Hugging Face, based on Llama3:
https://huggingface.co/docs/transformers/main/en/model_doc/idefics3#idefics3
Any chance you can add this to Ollama?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7200/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6757
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6757/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6757/comments
|
https://api.github.com/repos/ollama/ollama/issues/6757/events
|
https://github.com/ollama/ollama/pull/6757
| 2,520,405,760
|
PR_kwDOJ0Z1Ps57MqFB
| 6,757
|
DO NOT MERGE - ci testing
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-11T18:03:17
| 2024-09-11T21:52:47
| 2024-09-11T21:52:21
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6757",
"html_url": "https://github.com/ollama/ollama/pull/6757",
"diff_url": "https://github.com/ollama/ollama/pull/6757.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6757.patch",
"merged_at": null
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6757/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8234
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8234/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8234/comments
|
https://api.github.com/repos/ollama/ollama/issues/8234/events
|
https://github.com/ollama/ollama/issues/8234
| 2,758,183,386
|
I_kwDOJ0Z1Ps6kZona
| 8,234
|
Standard Linux install includes CUDA libraries even if unused
|
{
"login": "lamyergeier",
"id": 42092626,
"node_id": "MDQ6VXNlcjQyMDkyNjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/42092626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lamyergeier",
"html_url": "https://github.com/lamyergeier",
"followers_url": "https://api.github.com/users/lamyergeier/followers",
"following_url": "https://api.github.com/users/lamyergeier/following{/other_user}",
"gists_url": "https://api.github.com/users/lamyergeier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lamyergeier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lamyergeier/subscriptions",
"organizations_url": "https://api.github.com/users/lamyergeier/orgs",
"repos_url": "https://api.github.com/users/lamyergeier/repos",
"events_url": "https://api.github.com/users/lamyergeier/events{/privacy}",
"received_events_url": "https://api.github.com/users/lamyergeier/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-12-24T18:14:29
| 2025-01-06T17:36:18
| 2025-01-06T17:36:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The install script command `curl -fsSL https://ollama.com/install.sh | sh`
downloads the following files:
```
$ tree
.
├── libcublas.so.11 -> libcublas.so.11.5.1.109
├── libcublas.so.11.5.1.109
├── libcublas.so.12 -> ./libcublas.so.12.4.5.8
├── libcublas.so.12.4.5.8
├── libcublasLt.so.11 -> libcublasLt.so.11.5.1.109
├── libcublasLt.so.11.5.1.109
├── libcublasLt.so.12 -> ./libcublasLt.so.12.4.5.8
├── libcublasLt.so.12.4.5.8
├── libcudart.so.11.0 -> libcudart.so.11.3.109
├── libcudart.so.11.3.109
├── libcudart.so.12 -> libcudart.so.12.4.127
├── libcudart.so.12.4.127
└── runners
├── cpu_avx
│ └── ollama_llama_server
├── cpu_avx2
│ └── ollama_llama_server
├── cuda_v11_avx
│ ├── libggml_cuda_v11.so
│ └── ollama_llama_server
├── cuda_v12_avx
│ ├── libggml_cuda_v12.so
│ └── ollama_llama_server
└── rocm_avx
├── libggml_rocm.so
└── ollama_llama_server
```
This includes massive downloads for CUDA:
``` bash
$ command du -h
943M ./runners/cuda_v11_avx
9.4M ./runners/cpu_avx2
9.4M ./runners/cpu_avx
440M ./runners/rocm_avx
1.2G ./runners/cuda_v12_avx
2.6G ./runners
3.5G .
```

# Issue
Please do not download these large, unnecessary files when the device does not have an NVIDIA GPU.
### OS
Linux
### GPU
Intel
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8234/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5561
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5561/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5561/comments
|
https://api.github.com/repos/ollama/ollama/issues/5561/events
|
https://github.com/ollama/ollama/issues/5561
| 2,397,101,802
|
I_kwDOJ0Z1Ps6O4N7q
| 5,561
|
Ollama working issue
|
{
"login": "mayurab22",
"id": 143072257,
"node_id": "U_kgDOCIccAQ",
"avatar_url": "https://avatars.githubusercontent.com/u/143072257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mayurab22",
"html_url": "https://github.com/mayurab22",
"followers_url": "https://api.github.com/users/mayurab22/followers",
"following_url": "https://api.github.com/users/mayurab22/following{/other_user}",
"gists_url": "https://api.github.com/users/mayurab22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mayurab22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mayurab22/subscriptions",
"organizations_url": "https://api.github.com/users/mayurab22/orgs",
"repos_url": "https://api.github.com/users/mayurab22/repos",
"events_url": "https://api.github.com/users/mayurab22/events{/privacy}",
"received_events_url": "https://api.github.com/users/mayurab22/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-07-09T04:40:35
| 2024-09-17T15:58:39
| 2024-09-17T15:58:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have installed Ollama on my system and it is running in the background, but there is an issue when downloading and running models: I am unable to pull or run any model on my Windows system.
These are the two error messages shown while trying to download a model, in both an administrator cmd and a regular command prompt:
1) Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/gemma2/manifests/latest": net/http: TLS handshake timeout
2) 'D:ollama' is not recognized as an internal or external command,
operable program or batch file.
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
0.1.48
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5561/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7473
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7473/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7473/comments
|
https://api.github.com/repos/ollama/ollama/issues/7473/events
|
https://github.com/ollama/ollama/pull/7473
| 2,630,849,083
|
PR_kwDOJ0Z1Ps6AtjVs
| 7,473
|
nvidia libs have inconsistent ordering
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-02T22:59:02
| 2024-11-02T23:35:43
| 2024-11-02T23:35:41
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7473",
"html_url": "https://github.com/ollama/ollama/pull/7473",
"diff_url": "https://github.com/ollama/ollama/pull/7473.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7473.patch",
"merged_at": "2024-11-02T23:35:41"
}
|
The runtime and management libraries may not always have identical ordering, so use the device UUID to correlate instead of ID.
Fixes #7429
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7473/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5943
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5943/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5943/comments
|
https://api.github.com/repos/ollama/ollama/issues/5943/events
|
https://github.com/ollama/ollama/issues/5943
| 2,429,513,363
|
I_kwDOJ0Z1Ps6Qz26T
| 5,943
|
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama3.1/manifests/latest": net/http: TLS handshake timeout
|
{
"login": "kid1milli",
"id": 176576130,
"node_id": "U_kgDOCoZWgg",
"avatar_url": "https://avatars.githubusercontent.com/u/176576130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kid1milli",
"html_url": "https://github.com/kid1milli",
"followers_url": "https://api.github.com/users/kid1milli/followers",
"following_url": "https://api.github.com/users/kid1milli/following{/other_user}",
"gists_url": "https://api.github.com/users/kid1milli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kid1milli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kid1milli/subscriptions",
"organizations_url": "https://api.github.com/users/kid1milli/orgs",
"repos_url": "https://api.github.com/users/kid1milli/repos",
"events_url": "https://api.github.com/users/kid1milli/events{/privacy}",
"received_events_url": "https://api.github.com/users/kid1milli/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-07-25T09:42:21
| 2024-10-26T03:41:50
| 2024-09-17T15:44:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I deploy llama3.1 with Ollama on Windows, the system displays: Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama3.1/manifests/latest": net/http: TLS handshake timeout
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
_No response_
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5943/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/5894
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5894/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5894/comments
|
https://api.github.com/repos/ollama/ollama/issues/5894/events
|
https://github.com/ollama/ollama/pull/5894
| 2,426,214,323
|
PR_kwDOJ0Z1Ps52RRnR
| 5,894
|
feat: K/V cache quantisation (massive vRAM improvement!)
|
{
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/followers",
"following_url": "https://api.github.com/users/sammcj/following{/other_user}",
"gists_url": "https://api.github.com/users/sammcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammcj/subscriptions",
"organizations_url": "https://api.github.com/users/sammcj/orgs",
"repos_url": "https://api.github.com/users/sammcj/repos",
"events_url": "https://api.github.com/users/sammcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammcj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 45
| 2024-07-23T22:16:36
| 2024-08-09T07:27:23
| 2024-08-09T06:45:43
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5894",
"html_url": "https://github.com/ollama/ollama/pull/5894",
"diff_url": "https://github.com/ollama/ollama/pull/5894.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5894.patch",
"merged_at": null
}
|
# THIS PR HAS MOVED TO https://github.com/ollama/ollama/pull/6279
---
This PR introduces optional K/V (context) cache quantisation.
In addition, the deprecated `F16KV` parameter has been removed; if a user wishes for some reason to run the K/V cache at f32, they can provide that as an option.
## Impact
- With defaults (f16) - none; behaviour is the same as the current defaults.
- With q8_0
  - **The K/V context cache will consume 1/2 the vRAM** (!)
  - A _very_ small loss in quality within the cache
- With q4_0
  - **The K/V context cache will consume 1/4 the vRAM** (!!)
  - A small/medium loss in quality within the cache
  - For example, loading llama3.1 8b with a 32K context drops the cache's vRAM usage from 4GB to 1.1GB
- The q4_1 -> q5_1 quantisations fall in between.
Additional quantisations supported by llama.cpp and this PR, whose usefulness may depend on the quantisation of the model you're running:
`q5_1`, `q5_0`, `q4_1`, `iq4_nl`
- Fixes https://github.com/ollama/ollama/issues/5091
- Related discussion in llama.cpp - https://github.com/ggerganov/llama.cpp/discussions/5932
- (Note that ExllamaV2 has a similar feature - https://github.com/turboderp/exllamav2/blob/master/doc/qcache_eval.md)
## Screenshots
Example of estimated (v)RAM savings - f16 (q8_0,q4_0)
<img width="1211" alt="image" src="https://github.com/user-attachments/assets/a3520770-7b31-40c7-b45b-4aad6db9b117">
### f16

### q4_0

### q8_0

## Performance
llama.cpp did some perplexity measurements (although, looking at the commits, things have likely improved even further since May when they were done, and CUDA graphs were later fixed, etc.): https://github.com/ggerganov/llama.cpp/pull/7412#issuecomment-2120427347
As far as I can tell (at least with q6_k quants) there isn't much of a noticeable hit to performance.
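For anyone wanting to try this, a minimal sketch of how a quantised cache would be selected — assuming the environment-variable interface (`OLLAMA_KV_CACHE_TYPE`) used by the successor PR; the exact name may differ in your build:

```shell
# Assumed interface: OLLAMA_KV_CACHE_TYPE selects the K/V cache precision.
# f16 (default) = full-size cache; q8_0 ~ 1/2 the vRAM; q4_0 ~ 1/4 the vRAM.
export OLLAMA_KV_CACHE_TYPE=q8_0
echo "K/V cache type set to: $OLLAMA_KV_CACHE_TYPE"
# ollama serve   # restart the server so the setting takes effect
```

The setting applies server-wide, so it needs to be exported in the environment of the `ollama serve` process, not just the client shell.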
|
{
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/followers",
"following_url": "https://api.github.com/users/sammcj/following{/other_user}",
"gists_url": "https://api.github.com/users/sammcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammcj/subscriptions",
"organizations_url": "https://api.github.com/users/sammcj/orgs",
"repos_url": "https://api.github.com/users/sammcj/repos",
"events_url": "https://api.github.com/users/sammcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammcj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5894/reactions",
"total_count": 35,
"+1": 20,
"-1": 0,
"laugh": 0,
"hooray": 6,
"confused": 0,
"heart": 9,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5894/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6008
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6008/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6008/comments
|
https://api.github.com/repos/ollama/ollama/issues/6008/events
|
https://github.com/ollama/ollama/issues/6008
| 2,433,313,420
|
I_kwDOJ0Z1Ps6RCWqM
| 6,008
|
Ollama is running on both CPU and GPU - expected to use GPU only
|
{
"login": "wxletter",
"id": 55588907,
"node_id": "MDQ6VXNlcjU1NTg4OTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/55588907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wxletter",
"html_url": "https://github.com/wxletter",
"followers_url": "https://api.github.com/users/wxletter/followers",
"following_url": "https://api.github.com/users/wxletter/following{/other_user}",
"gists_url": "https://api.github.com/users/wxletter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wxletter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wxletter/subscriptions",
"organizations_url": "https://api.github.com/users/wxletter/orgs",
"repos_url": "https://api.github.com/users/wxletter/repos",
"events_url": "https://api.github.com/users/wxletter/events{/privacy}",
"received_events_url": "https://api.github.com/users/wxletter/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 21
| 2024-07-27T06:52:06
| 2025-01-24T09:06:49
| 2024-07-29T17:16:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
**What is the issue?**
When I run "ollama run llama3.1:70b", I can see that 22.9/24 GB of dedicated GPU memory is used, and 18.9/31.9 GB of shared GPU memory is used (it's in Chinese so I did the translation).
<img width="301" alt="6f29d8b7b0c24ad60fcc36b0cf56593" src="https://github.com/user-attachments/assets/3f64fa11-684f-445a-bbfc-f06155829d9c">
From "server.log" I can see "offloaded 42/81 layers to GPU", and when I'm chatting with llama3.1 the response is very slow, "ollama ps" shows:

Memory should be enough to run this model, so why are only 42/81 layers offloaded to the GPU, and why is Ollama still using the CPU? Is there a way to force Ollama to use the GPU? Server log attached; let me know if there's any other info that would be helpful.
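For what it's worth, Ollama's `num_gpu` option overrides the automatic layer-offload estimate. A sketch of an interactive session (assuming all 81 layers would actually fit; forcing too many layers can cause an out-of-memory failure):

```
ollama run llama3.1:70b
>>> /set parameter num_gpu 81
>>> /save llama3.1-70b-gpu
```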
**OS**
Windows11
**GPU**
Nvidia RTX 4090
**CPU**
Intel i7 13700KF
**RAM**
64GB
**Ollama version**
0.3.0
[server.log](https://github.com/user-attachments/files/16398489/server.log)
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6008/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6008/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8315
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8315/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8315/comments
|
https://api.github.com/repos/ollama/ollama/issues/8315/events
|
https://github.com/ollama/ollama/pull/8315
| 2,769,880,081
|
PR_kwDOJ0Z1Ps6GyDLK
| 8,315
|
example: update langchain-python-rag-websummary, resolve deprecated class problem
|
{
"login": "Talen-520",
"id": 63370853,
"node_id": "MDQ6VXNlcjYzMzcwODUz",
"avatar_url": "https://avatars.githubusercontent.com/u/63370853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Talen-520",
"html_url": "https://github.com/Talen-520",
"followers_url": "https://api.github.com/users/Talen-520/followers",
"following_url": "https://api.github.com/users/Talen-520/following{/other_user}",
"gists_url": "https://api.github.com/users/Talen-520/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Talen-520/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Talen-520/subscriptions",
"organizations_url": "https://api.github.com/users/Talen-520/orgs",
"repos_url": "https://api.github.com/users/Talen-520/repos",
"events_url": "https://api.github.com/users/Talen-520/events{/privacy}",
"received_events_url": "https://api.github.com/users/Talen-520/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2025-01-06T05:49:19
| 2025-01-14T17:44:05
| 2025-01-14T17:44:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8315",
"html_url": "https://github.com/ollama/ollama/pull/8315",
"diff_url": "https://github.com/ollama/ollama/pull/8315.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8315.patch",
"merged_at": null
}
|
**Issue**
When running the original code, the following deprecation warning is encountered:
```bash
LangChainDeprecationWarning: The class Ollama was deprecated in LangChain 0.3.1 and will be removed in 1.0.0.
```
**Changes Made**
- Replaced the deprecated `Ollama` class with the updated implementation so that it works with the most recent LangChain version.
- Added BeautifulSoup4 to the project dependencies to support WebBaseLoader.
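For reference, the migration looks roughly like this (a sketch, assuming the `langchain-ollama` package is installed and an Ollama server is running locally):

```python
# old (deprecated in LangChain 0.3.1):
# from langchain_community.llms import Ollama
# new:
from langchain_ollama import OllamaLLM

llm = OllamaLLM(model="llama3.1")
print(llm.invoke("Summarize this page in one sentence."))
```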
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8315/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5929
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5929/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5929/comments
|
https://api.github.com/repos/ollama/ollama/issues/5929/events
|
https://github.com/ollama/ollama/issues/5929
| 2,428,509,272
|
I_kwDOJ0Z1Ps6QwBxY
| 5,929
|
Second Disk Support
|
{
"login": "wioniqle-q",
"id": 69215407,
"node_id": "MDQ6VXNlcjY5MjE1NDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/69215407?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wioniqle-q",
"html_url": "https://github.com/wioniqle-q",
"followers_url": "https://api.github.com/users/wioniqle-q/followers",
"following_url": "https://api.github.com/users/wioniqle-q/following{/other_user}",
"gists_url": "https://api.github.com/users/wioniqle-q/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wioniqle-q/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wioniqle-q/subscriptions",
"organizations_url": "https://api.github.com/users/wioniqle-q/orgs",
"repos_url": "https://api.github.com/users/wioniqle-q/repos",
"events_url": "https://api.github.com/users/wioniqle-q/events{/privacy}",
"received_events_url": "https://api.github.com/users/wioniqle-q/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-24T21:31:19
| 2024-12-06T17:47:15
| 2024-07-24T21:39:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have a second disk with a large capacity, but the Windows installer (ollama.exe) downloads models directly into USERS/USERNAME/.ollama.
I don't have any space left on my C: drive, so I need a way to download and store models on the second disk instead.
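For anyone hitting this: the model store location can be relocated with the `OLLAMA_MODELS` environment variable (a sketch for Windows; the drive letter and path are placeholders):

```
rem Persist for the current user, then restart the Ollama app/service
setx OLLAMA_MODELS "D:\ollama\models"
```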
|
{
"login": "wioniqle-q",
"id": 69215407,
"node_id": "MDQ6VXNlcjY5MjE1NDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/69215407?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wioniqle-q",
"html_url": "https://github.com/wioniqle-q",
"followers_url": "https://api.github.com/users/wioniqle-q/followers",
"following_url": "https://api.github.com/users/wioniqle-q/following{/other_user}",
"gists_url": "https://api.github.com/users/wioniqle-q/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wioniqle-q/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wioniqle-q/subscriptions",
"organizations_url": "https://api.github.com/users/wioniqle-q/orgs",
"repos_url": "https://api.github.com/users/wioniqle-q/repos",
"events_url": "https://api.github.com/users/wioniqle-q/events{/privacy}",
"received_events_url": "https://api.github.com/users/wioniqle-q/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5929/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4482
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4482/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4482/comments
|
https://api.github.com/repos/ollama/ollama/issues/4482/events
|
https://github.com/ollama/ollama/pull/4482
| 2,301,540,456
|
PR_kwDOJ0Z1Ps5vumZH
| 4,482
|
Skip max queue test on remote
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-16T23:26:04
| 2024-05-16T23:43:51
| 2024-05-16T23:43:48
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4482",
"html_url": "https://github.com/ollama/ollama/pull/4482",
"diff_url": "https://github.com/ollama/ollama/pull/4482.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4482.patch",
"merged_at": "2024-05-16T23:43:48"
}
|
This test needs to be able to adjust the queue size down from our default setting for a reliable test, so it needs to skip on remote test execution mode.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4482/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/158
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/158/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/158/comments
|
https://api.github.com/repos/ollama/ollama/issues/158/events
|
https://github.com/ollama/ollama/issues/158
| 1,815,440,468
|
I_kwDOJ0Z1Ps5sNWxU
| 158
|
Build fails with `server/routes.go:53:20: undefined: llama.New`
|
{
"login": "gbro3n",
"id": 3813371,
"node_id": "MDQ6VXNlcjM4MTMzNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3813371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbro3n",
"html_url": "https://github.com/gbro3n",
"followers_url": "https://api.github.com/users/gbro3n/followers",
"following_url": "https://api.github.com/users/gbro3n/following{/other_user}",
"gists_url": "https://api.github.com/users/gbro3n/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbro3n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbro3n/subscriptions",
"organizations_url": "https://api.github.com/users/gbro3n/orgs",
"repos_url": "https://api.github.com/users/gbro3n/repos",
"events_url": "https://api.github.com/users/gbro3n/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbro3n/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2023-07-21T09:04:12
| 2023-07-21T20:15:23
| 2023-07-21T20:15:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Running build from source command:
```
go build .
```
Results in the following error:
```
# github.com/jmorganca/ollama/server
server/routes.go:53:20: undefined: llama.New
```
Context: building on Ubuntu 22.04 using go1.20.6 linux/amd64.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/158/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6721
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6721/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6721/comments
|
https://api.github.com/repos/ollama/ollama/issues/6721/events
|
https://github.com/ollama/ollama/issues/6721
| 2,515,550,382
|
I_kwDOJ0Z1Ps6V8ECu
| 6,721
|
Error loading model architecture for miniCPM3-4B: Unknown architecture 'minicpm3'
|
{
"login": "ChuiyuWang1",
"id": 51296659,
"node_id": "MDQ6VXNlcjUxMjk2NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/51296659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChuiyuWang1",
"html_url": "https://github.com/ChuiyuWang1",
"followers_url": "https://api.github.com/users/ChuiyuWang1/followers",
"following_url": "https://api.github.com/users/ChuiyuWang1/following{/other_user}",
"gists_url": "https://api.github.com/users/ChuiyuWang1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChuiyuWang1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChuiyuWang1/subscriptions",
"organizations_url": "https://api.github.com/users/ChuiyuWang1/orgs",
"repos_url": "https://api.github.com/users/ChuiyuWang1/repos",
"events_url": "https://api.github.com/users/ChuiyuWang1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChuiyuWang1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-09-10T06:44:48
| 2024-10-14T01:45:14
| 2024-09-12T01:17:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello Ollama team,
First of all, I want to express my appreciation for the amazing work you’ve done with Ollama. The tool has been incredibly helpful, and I hope your team continues to thrive and build even more powerful features!
I’ve encountered an issue while trying to use the miniCPM3-4B model with Ollama for inference.
Since there is no miniCPM3-4B model in ollama library, I used `yefx/minicpm3_4b` and `shibing624/minicpm3_4b`. When I perform inference using these models, the error shows
`llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'minicpm3'`
I'm using ollama version 0.3.10 and Windows 11 OS.
I have two questions here:
1. Does Ollama currently support the miniCPM3-4B model architecture?
2. If not, are there any plans to support it, or is there a workaround that would allow me to use this model with Ollama?
Thank you for your time and for any guidance you can provide!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6721/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1964
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1964/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1964/comments
|
https://api.github.com/repos/ollama/ollama/issues/1964/events
|
https://github.com/ollama/ollama/issues/1964
| 2,079,695,657
|
I_kwDOJ0Z1Ps579aMp
| 1,964
|
Self-extend support
|
{
"login": "coder543",
"id": 726063,
"node_id": "MDQ6VXNlcjcyNjA2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/726063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coder543",
"html_url": "https://github.com/coder543",
"followers_url": "https://api.github.com/users/coder543/followers",
"following_url": "https://api.github.com/users/coder543/following{/other_user}",
"gists_url": "https://api.github.com/users/coder543/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coder543/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coder543/subscriptions",
"organizations_url": "https://api.github.com/users/coder543/orgs",
"repos_url": "https://api.github.com/users/coder543/repos",
"events_url": "https://api.github.com/users/coder543/events{/privacy}",
"received_events_url": "https://api.github.com/users/coder543/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 7
| 2024-01-12T20:49:48
| 2024-11-06T19:03:40
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I’m not sure what all would be involved, but something that’s making waves is “self extend”, where it seems to be possible to make models work at larger context sizes than what they were originally designed for.
In a hypothetical outcome, it would be amazing if models were automatically self-extended when the requested context is larger than the trained context.
Some relevant links:
https://www.reddit.com/r/LocalLLaMA/comments/194mmki/selfextend_works_for_phi2_now_looks_good/
https://github.com/ggerganov/llama.cpp/pull/4889
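For context, the llama.cpp implementation linked above exposes self-extend through group-attention flags, roughly like this (a sketch; the model path and values are placeholders, not tuned recommendations):

```
./main -m model.gguf -c 8192 --grp-attn-n 4 --grp-attn-w 1024
```

Exposing equivalent options (or applying them automatically when the requested context exceeds the trained context) is what this request is asking for.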
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1964/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1964/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/513
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/513/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/513/comments
|
https://api.github.com/repos/ollama/ollama/issues/513/events
|
https://github.com/ollama/ollama/issues/513
| 1,892,210,840
|
I_kwDOJ0Z1Ps5wyNiY
| 513
|
Make models directory customizable
|
{
"login": "tastycode",
"id": 809953,
"node_id": "MDQ6VXNlcjgwOTk1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/809953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tastycode",
"html_url": "https://github.com/tastycode",
"followers_url": "https://api.github.com/users/tastycode/followers",
"following_url": "https://api.github.com/users/tastycode/following{/other_user}",
"gists_url": "https://api.github.com/users/tastycode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tastycode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tastycode/subscriptions",
"organizations_url": "https://api.github.com/users/tastycode/orgs",
"repos_url": "https://api.github.com/users/tastycode/repos",
"events_url": "https://api.github.com/users/tastycode/events{/privacy}",
"received_events_url": "https://api.github.com/users/tastycode/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-09-12T10:28:46
| 2023-09-30T05:04:15
| 2023-09-30T05:04:15
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://github.com/jmorganca/ollama/blob/45ac07cd025f9d1e84917db3f00e0f3e5651aede/server/modelpath.go#L135C7-L136C31
If you often work with different AI packages, you know that the models add up quickly. I keep my models on an external drive. The user should at least be able to customize this behavior with an environment variable. I am not up-to-speed on Go tooling, otherwise I'd be happy to open a PR.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/513/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/513/timeline
| null |
completed
| false
|