| url stringlengths 51-54 | repository_url stringclasses 1 value | labels_url stringlengths 65-68 | comments_url stringlengths 60-63 | events_url stringlengths 58-61 | html_url stringlengths 39-44 | id int64 1.78B-2.82B | node_id stringlengths 18-19 | number int64 1-8.69k | title stringlengths 1-382 | user dict | labels listlengths 0-5 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0-2 | milestone null | comments int64 0-323 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4 values | sub_issues_summary dict | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 2-118k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 60-63 | performed_via_github_app null | state_reason stringclasses 4 values | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/3948
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3948/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3948/comments
|
https://api.github.com/repos/ollama/ollama/issues/3948/events
|
https://github.com/ollama/ollama/pull/3948
| 2,266,060,659
|
PR_kwDOJ0Z1Ps5t25PB
| 3,948
|
Refactor windows generate for more modular usage
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-26T15:48:09
| 2024-04-26T16:17:23
| 2024-04-26T16:17:20
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3948",
"html_url": "https://github.com/ollama/ollama/pull/3948",
"diff_url": "https://github.com/ollama/ollama/pull/3948.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3948.patch",
"merged_at": "2024-04-26T16:17:20"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3948/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6659
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6659/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6659/comments
|
https://api.github.com/repos/ollama/ollama/issues/6659/events
|
https://github.com/ollama/ollama/pull/6659
| 2,508,676,049
|
PR_kwDOJ0Z1Ps56ktDm
| 6,659
|
Detect running in a container follow up
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-09-05T20:42:59
| 2024-09-05T21:19:33
| 2024-09-05T21:19:30
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6659",
"html_url": "https://github.com/ollama/ollama/pull/6659",
"diff_url": "https://github.com/ollama/ollama/pull/6659.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6659.patch",
"merged_at": null
}
|
Address additional review comments post merge.
Follow up to #6495
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6659/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/395
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/395/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/395/comments
|
https://api.github.com/repos/ollama/ollama/issues/395/events
|
https://github.com/ollama/ollama/pull/395
| 1,860,999,263
|
PR_kwDOJ0Z1Ps5YeVUJ
| 395
|
Document what happens upon first app launch
|
{
"login": "justinmayer",
"id": 1503700,
"node_id": "MDQ6VXNlcjE1MDM3MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1503700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justinmayer",
"html_url": "https://github.com/justinmayer",
"followers_url": "https://api.github.com/users/justinmayer/followers",
"following_url": "https://api.github.com/users/justinmayer/following{/other_user}",
"gists_url": "https://api.github.com/users/justinmayer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justinmayer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justinmayer/subscriptions",
"organizations_url": "https://api.github.com/users/justinmayer/orgs",
"repos_url": "https://api.github.com/users/justinmayer/repos",
"events_url": "https://api.github.com/users/justinmayer/events{/privacy}",
"received_events_url": "https://api.github.com/users/justinmayer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 11
| 2023-08-22T09:07:03
| 2024-10-19T23:06:05
| 2023-10-24T19:45:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/395",
"html_url": "https://github.com/ollama/ollama/pull/395",
"diff_url": "https://github.com/ollama/ollama/pull/395.diff",
"patch_url": "https://github.com/ollama/ollama/pull/395.patch",
"merged_at": null
}
|
End users should be informed regarding what will happen upon first launch of the application, including what directories are created, where downloaded models will be stored, what background processes will be launched, and what system-level changes will be made.
In the long run, hopefully these behaviors will be changed to allow for more flexibility and customization, but for now it is important for end users to have a clear understanding of what _currently_ happens when launching the application for the first time.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/395/reactions",
"total_count": 35,
"+1": 24,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 11,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/395/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8588
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8588/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8588/comments
|
https://api.github.com/repos/ollama/ollama/issues/8588/events
|
https://github.com/ollama/ollama/issues/8588
| 2,811,325,311
|
I_kwDOJ0Z1Ps6nkWt_
| 8,588
|
Qwen not recognizing function / tools
|
{
"login": "Poly-Frag",
"id": 195621056,
"node_id": "U_kgDOC6jwwA",
"avatar_url": "https://avatars.githubusercontent.com/u/195621056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Poly-Frag",
"html_url": "https://github.com/Poly-Frag",
"followers_url": "https://api.github.com/users/Poly-Frag/followers",
"following_url": "https://api.github.com/users/Poly-Frag/following{/other_user}",
"gists_url": "https://api.github.com/users/Poly-Frag/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Poly-Frag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Poly-Frag/subscriptions",
"organizations_url": "https://api.github.com/users/Poly-Frag/orgs",
"repos_url": "https://api.github.com/users/Poly-Frag/repos",
"events_url": "https://api.github.com/users/Poly-Frag/events{/privacy}",
"received_events_url": "https://api.github.com/users/Poly-Frag/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 11
| 2025-01-26T05:11:59
| 2025-01-26T22:40:02
| 2025-01-26T22:40:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Qwen2.5:14b-instruct has no access to tools / functions. I also used majx13/test to see if it was a model issue, but it isn't, so it's probably a mistake I made or a bug with Ollama.
Here is the code I'm using.
```python
from helpers import user
from ollama import chat
import chainlit as cl
import traceback
# Variables
rooms = ['master bedroom', 'bedroom 001', 'bedroom 002',
'bathroom' , 'living room', 'family room']
tools = [
{
"name": "toggle_lights",
"description": "Turn on or off lights in a given room",
"strict": True,
"parameters": {
"type": "object",
"required": ["room", "state"],
"properties": {
"room": {
"type": "string",
"description": "The name of the room where the lights will be toggled",
},
"state": {
"type": "string",
"description": "The desired state of the lights",
"enum": ["on", "off"],
},
},
"additionalProperties": False,
},
},
{
"name": "set_temperature",
"description": "Set temperature in a given room, to a given temperature",
"strict": True,
"parameters": {
"type": "object",
"required": ["room", "temp"],
"properties": {
"room": {
"type": "string",
"description": "Room name (eg Living Room, or Bathroom, or Bedroom",
},
"temp": {
"type": "number",
"description": "Desired temperature to set in the room (in Celsius)",
},
},
"additionalProperties": False,
},
},
{
"name": "internet_search",
"description": "Performs an internet search using the given query",
"strict": True,
"parameters": {
"type": "object",
"required": ["query"],
"properties": {
"query": {"type": "string", "description": "The search query"}
},
"additionalProperties": False,
},
},
{
"name": "generate_image",
"description": "Image generation with the given prompt",
"strict": True,
"parameters": {
"type": "object",
"required": ["prompt"],
"properties": {
"prompt": {
"type": "string",
"description": "Textual description to generate the image",
}
},
"additionalProperties": False,
},
},
]
# Functions
@cl.step(name='Toggle Lights', type='tool')
async def toggleLights(room: str, state: str):
if room.lower() in rooms: return f'Lights in {room} turned {state}'
return 'Unknown room. Available rooms are '+', '.join(rooms)
@cl.step(name='Set Temperature', type='tool')
async def setTemperature(room: str, temp: float):
if room.lower() in rooms: return f'Set temperature in {room} to {temp}'
return 'Unknown room. Available Rooms are '+', '.join(rooms)
# Function Mapping
functionMapping = {
'set_temperature': setTemperature,
'toggle_lights' : toggleLights,
}
# Code
with open(r'D:\Dev\poly\instructions.txt') as f: instructions = f.read()
@cl.password_auth_callback
def authCallback(username: str, password: str):
return user.getUser(username, password)
@cl.on_chat_start
async def onChatStart():
authUser: cl.User = cl.user_session.get('user')
userConversation = user.getConversation(authUser.display_name)
messages = [{'role': 'system', 'content': instructions}]
for type, message in userConversation:
type = type.lower()
if type == 'bot': type = 'assistant'
messages += [{'role': type, 'content': message}]
await cl.Message(message, type=type+'_message').send()
cl.user_session.set('conversation', messages)
@cl.on_message
async def main(message: cl.Message):
try:
authUser: cl.User = cl.user_session.get('user')
conversation: list = cl.user_session.get('conversation')
conversation += [{'role': 'user', 'content': message.content}]
user.addMessage(authUser.display_name, message.content, 'user')
response: str = chat(
'qwen2.5:14b-instruct',
conversation,
tools=tools,
)['message']
await cl.Message(content=response['content']).send()
print(response)
if toolCalls := response.get('tool_calls'):
for toolCall in toolCalls:
toolName = toolCall['function']['name']
toolArgs = toolCall['function']['arguments']
toolOutput = globals()[toolName](**eval(toolArgs))
print('Tool output:', toolOutput)
conversation += ({
'role': 'tool',
'name': toolName,
'content': toolOutput
})
response: str = chat(
'qwen2.5:14b-instruct',
conversation,
tools=tools,
)['message']
await cl.Message(content=response['content']).send()
print(response)
user.addMessage(authUser.display_name, response['content'], 'bot')
print(response)
except Exception:
await cl.Message(content=f'An error occurred: {traceback.format_exc()}').send()
```
I am using an RTX 4070 12GB, by the way. I don't know if that helps, but I thought I would put it here just in case.
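For reference, a minimal sketch of the tool schema documented for Ollama's chat API, assuming the `ollama` Python client is installed and the model is pulled locally. The documented format wraps each definition in `{"type": "function", "function": {...}}`, unlike the flat dicts above; newer versions of the Python client reportedly also accept plain Python functions as tools.
```python
# Sketch only: one tool in the documented wrapped format, passed to ollama.chat.
from ollama import chat

tools = [
    {
        "type": "function",
        "function": {
            "name": "toggle_lights",
            "description": "Turn lights in a given room on or off",
            "parameters": {
                "type": "object",
                "required": ["room", "state"],
                "properties": {
                    "room": {"type": "string", "description": "Room name"},
                    "state": {"type": "string", "enum": ["on", "off"]},
                },
            },
        },
    }
]

response = chat(
    model="qwen2.5:14b-instruct",  # any tool-capable model; illustrative
    messages=[{"role": "user", "content": "Turn off the lights in the living room"}],
    tools=tools,
)
print(response["message"])  # should include tool_calls if the model decides to call one
```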
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.4
|
{
"login": "Poly-Frag",
"id": 195621056,
"node_id": "U_kgDOC6jwwA",
"avatar_url": "https://avatars.githubusercontent.com/u/195621056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Poly-Frag",
"html_url": "https://github.com/Poly-Frag",
"followers_url": "https://api.github.com/users/Poly-Frag/followers",
"following_url": "https://api.github.com/users/Poly-Frag/following{/other_user}",
"gists_url": "https://api.github.com/users/Poly-Frag/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Poly-Frag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Poly-Frag/subscriptions",
"organizations_url": "https://api.github.com/users/Poly-Frag/orgs",
"repos_url": "https://api.github.com/users/Poly-Frag/repos",
"events_url": "https://api.github.com/users/Poly-Frag/events{/privacy}",
"received_events_url": "https://api.github.com/users/Poly-Frag/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8588/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3826
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3826/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3826/comments
|
https://api.github.com/repos/ollama/ollama/issues/3826/events
|
https://github.com/ollama/ollama/issues/3826
| 2,256,927,799
|
I_kwDOJ0Z1Ps6Ghfw3
| 3,826
|
No unicode output on windows when redirecting stdout
|
{
"login": "xdanielc",
"id": 59803819,
"node_id": "MDQ6VXNlcjU5ODAzODE5",
"avatar_url": "https://avatars.githubusercontent.com/u/59803819?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xdanielc",
"html_url": "https://github.com/xdanielc",
"followers_url": "https://api.github.com/users/xdanielc/followers",
"following_url": "https://api.github.com/users/xdanielc/following{/other_user}",
"gists_url": "https://api.github.com/users/xdanielc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xdanielc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xdanielc/subscriptions",
"organizations_url": "https://api.github.com/users/xdanielc/orgs",
"repos_url": "https://api.github.com/users/xdanielc/repos",
"events_url": "https://api.github.com/users/xdanielc/events{/privacy}",
"received_events_url": "https://api.github.com/users/xdanielc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-04-22T16:02:48
| 2024-10-25T20:43:17
| 2024-10-25T20:43:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
On Windows I can't output Unicode symbols into a txt file. The console output looks fine, but when I do something like
```ollama run llama3 "Cuéntame una historia" > test.txt```
the Unicode characters do not come out correctly in test.txt.
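A workaround sketch, assuming the problem is the encoding used for redirected output on Windows: capture the CLI output from Python and write it with an explicit UTF-8 encoding (prompt and file name are illustrative).
```python
# Hypothetical workaround: run the CLI and force UTF-8 when writing the file.
import subprocess

result = subprocess.run(
    ["ollama", "run", "llama3", "Cuéntame una historia"],
    capture_output=True,
    text=True,
    encoding="utf-8",  # decode the model output explicitly as UTF-8
)
with open("test.txt", "w", encoding="utf-8") as f:
    f.write(result.stdout)
```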
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3826/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3826/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1078
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1078/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1078/comments
|
https://api.github.com/repos/ollama/ollama/issues/1078/events
|
https://github.com/ollama/ollama/pull/1078
| 1,988,338,211
|
PR_kwDOJ0Z1Ps5fLf4G
| 1,078
|
New big-AGI integration
|
{
"login": "enricoros",
"id": 32999,
"node_id": "MDQ6VXNlcjMyOTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/32999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enricoros",
"html_url": "https://github.com/enricoros",
"followers_url": "https://api.github.com/users/enricoros/followers",
"following_url": "https://api.github.com/users/enricoros/following{/other_user}",
"gists_url": "https://api.github.com/users/enricoros/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enricoros/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enricoros/subscriptions",
"organizations_url": "https://api.github.com/users/enricoros/orgs",
"repos_url": "https://api.github.com/users/enricoros/repos",
"events_url": "https://api.github.com/users/enricoros/events{/privacy}",
"received_events_url": "https://api.github.com/users/enricoros/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-11-10T20:26:09
| 2023-11-16T21:24:53
| 2023-11-13T21:59:00
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1078",
"html_url": "https://github.com/ollama/ollama/pull/1078",
"diff_url": "https://github.com/ollama/ollama/pull/1078.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1078.patch",
"merged_at": "2023-11-13T21:59:00"
}
|
Hi @jmorganca, we added Ollama support to big-AGI, which makes it easy to fetch and list models, generate text, chat, compare models, make voice calls, and more.

I've linked the configuration document directly.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1078/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5961
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5961/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5961/comments
|
https://api.github.com/repos/ollama/ollama/issues/5961/events
|
https://github.com/ollama/ollama/issues/5961
| 2,430,853,632
|
I_kwDOJ0Z1Ps6Q4-IA
| 5,961
|
Would it be possible to save chat history into session data?
|
{
"login": "mcDandy",
"id": 18588943,
"node_id": "MDQ6VXNlcjE4NTg4OTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/18588943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcDandy",
"html_url": "https://github.com/mcDandy",
"followers_url": "https://api.github.com/users/mcDandy/followers",
"following_url": "https://api.github.com/users/mcDandy/following{/other_user}",
"gists_url": "https://api.github.com/users/mcDandy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcDandy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcDandy/subscriptions",
"organizations_url": "https://api.github.com/users/mcDandy/orgs",
"repos_url": "https://api.github.com/users/mcDandy/repos",
"events_url": "https://api.github.com/users/mcDandy/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcDandy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-07-25T19:35:28
| 2024-09-04T01:59:13
| 2024-09-04T01:59:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I was creating a story with the AI, but Ollama crashed shortly after I saved the session. Because I only have 12 GB of VRAM, each token takes forever (about 20 s) with gemma2_27B (the GPU is barely utilised because of the massive amount of data being moved between RAM and VRAM). It would help if I could save the session with the tokens sent between me and the AI and just turn the laptop off for the night. The file containing the session only contains settings. The only thing I found was `{"mediaType":"application/vnd.ollama.image.messages","digest":"sha256:f15e8d427ebb0dbab95d18481f910d293b1b1544aece22aceeff822063ce29e5","size":26616}`, which does not contain the messages or point to a file that does. The AI behaves as if the previous parts were not there and the prompt were a new beginning; the story simply does not continue.
Laptop: MSI Vector GP68HX 13VH
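Until something like this is supported natively, one workaround is to keep the conversation in your own client and persist it to disk between runs. A minimal sketch using the Ollama Python client; the file name and model tag are illustrative assumptions.
```python
# Sketch: persist chat history to JSON so a conversation can resume after a restart.
import json
import os
from ollama import chat

HISTORY_FILE = "story_session.json"  # hypothetical path

# Load previous messages if they exist, otherwise start fresh.
messages = []
if os.path.exists(HISTORY_FILE):
    with open(HISTORY_FILE, encoding="utf-8") as f:
        messages = json.load(f)

messages.append({"role": "user", "content": "Continue the story where we left off."})
reply = chat(model="gemma2:27b", messages=messages)["message"]  # model tag illustrative
messages.append({"role": reply["role"], "content": reply["content"]})

# Save the full history so the next run can send it back as context.
with open(HISTORY_FILE, "w", encoding="utf-8") as f:
    json.dump(messages, f, ensure_ascii=False, indent=2)
```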
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5961/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6881
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6881/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6881/comments
|
https://api.github.com/repos/ollama/ollama/issues/6881/events
|
https://github.com/ollama/ollama/issues/6881
| 2,537,221,186
|
I_kwDOJ0Z1Ps6XOuxC
| 6,881
|
pulling a big model can freeze the cpu
|
{
"login": "remco-pc",
"id": 8077908,
"node_id": "MDQ6VXNlcjgwNzc5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8077908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remco-pc",
"html_url": "https://github.com/remco-pc",
"followers_url": "https://api.github.com/users/remco-pc/followers",
"following_url": "https://api.github.com/users/remco-pc/following{/other_user}",
"gists_url": "https://api.github.com/users/remco-pc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remco-pc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remco-pc/subscriptions",
"organizations_url": "https://api.github.com/users/remco-pc/orgs",
"repos_url": "https://api.github.com/users/remco-pc/repos",
"events_url": "https://api.github.com/users/remco-pc/events{/privacy}",
"received_events_url": "https://api.github.com/users/remco-pc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-09-19T19:35:03
| 2024-10-23T12:10:27
| 2024-10-23T00:03:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I had a CPU lockup: while I was trying to type something over SSH during a model download, the system crashed with CPU locks.
### OS
Linux
### GPU
_No response_
### CPU
Intel
### Ollama version
ollama version is 0.3.6
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6881/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2777
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2777/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2777/comments
|
https://api.github.com/repos/ollama/ollama/issues/2777/events
|
https://github.com/ollama/ollama/issues/2777
| 2,156,542,337
|
I_kwDOJ0Z1Ps6AijmB
| 2,777
|
Request: change the default sha256 hash file separator for blobs to - (minus) instead of :
|
{
"login": "LumiWasTaken",
"id": 49376128,
"node_id": "MDQ6VXNlcjQ5Mzc2MTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/49376128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LumiWasTaken",
"html_url": "https://github.com/LumiWasTaken",
"followers_url": "https://api.github.com/users/LumiWasTaken/followers",
"following_url": "https://api.github.com/users/LumiWasTaken/following{/other_user}",
"gists_url": "https://api.github.com/users/LumiWasTaken/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LumiWasTaken/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LumiWasTaken/subscriptions",
"organizations_url": "https://api.github.com/users/LumiWasTaken/orgs",
"repos_url": "https://api.github.com/users/LumiWasTaken/repos",
"events_url": "https://api.github.com/users/LumiWasTaken/events{/privacy}",
"received_events_url": "https://api.github.com/users/LumiWasTaken/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-27T13:05:25
| 2024-03-13T16:30:53
| 2024-03-13T16:30:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Heya,
I have a setup where I'd like two systems, one for testing on Windows and one active on Linux, to use the same path for models and manifests to save storage.
The issue I have: the Linux deployment creates blob files named `sha256:SHATHATHASBEENGENERATED`, while Windows creates `sha256-SHATHATHASBEENGENERATED`. This makes the filenames incompatible with each other and troublesome on Windows, since any file with a `:` in its name is treated as non-existent.
Desired resolution:
A patch that stays backwards-compatible with files that use `:` but makes `-` the new default, so it also works when the storage is on an NTFS or exFAT file system.
Thank you!
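A hypothetical one-off migration sketch, not Ollama functionality: if the existing Linux blobs already use `:`, a small script could rename them to the `-` form so a shared NTFS/exFAT store works for both systems. The models path below is an assumption; adjust it to wherever OLLAMA_MODELS actually points.
```python
# Hypothetical migration: rename blobs from "sha256:<hash>" to "sha256-<hash>".
from pathlib import Path

blobs_dir = Path.home() / ".ollama" / "models" / "blobs"  # adjust to your shared store

for blob in blobs_dir.iterdir():
    if blob.name.startswith("sha256:"):
        target = blob.with_name(blob.name.replace("sha256:", "sha256-", 1))
        if not target.exists():
            blob.rename(target)
            print(f"renamed {blob.name} -> {target.name}")
```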
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2777/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3041
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3041/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3041/comments
|
https://api.github.com/repos/ollama/ollama/issues/3041/events
|
https://github.com/ollama/ollama/issues/3041
| 2,177,751,710
|
I_kwDOJ0Z1Ps6Bzdqe
| 3,041
|
Models disappear when changing the OLLAMA_HOST to 0.0.0.0
|
{
"login": "Howe829",
"id": 28386363,
"node_id": "MDQ6VXNlcjI4Mzg2MzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/28386363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Howe829",
"html_url": "https://github.com/Howe829",
"followers_url": "https://api.github.com/users/Howe829/followers",
"following_url": "https://api.github.com/users/Howe829/following{/other_user}",
"gists_url": "https://api.github.com/users/Howe829/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Howe829/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Howe829/subscriptions",
"organizations_url": "https://api.github.com/users/Howe829/orgs",
"repos_url": "https://api.github.com/users/Howe829/repos",
"events_url": "https://api.github.com/users/Howe829/events{/privacy}",
"received_events_url": "https://api.github.com/users/Howe829/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2024-03-10T14:42:20
| 2024-03-13T03:04:06
| 2024-03-11T20:24:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi there! I want to let devices on my LAN access Ollama, so I set OLLAMA_HOST=0.0.0.0.
When I restart Ollama, the models I pulled before have disappeared. I don't know whether this is a bug or something else.
In addition, I think we need a 'restart' command to restart the server.
Thanks for the help in advance.
Ollama: 0.1.28
My OS info: Ubuntu 23.04, Linux 6.2.0-39-generic
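A quick way to check which models the restarted server actually sees is to query the /api/tags endpoint; a small sketch below (host and port are the defaults, adjust as needed). If the list comes back empty, the server instance you restarted may be reading a different OLLAMA_MODELS directory (for example, a systemd service running as the `ollama` user) than the one your earlier pulls went into.
```python
# Sketch: list the models visible to the running server via the REST API.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    data = json.load(resp)

for model in data.get("models", []):
    print(model["name"], model.get("size"))
```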
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3041/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8563
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8563/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8563/comments
|
https://api.github.com/repos/ollama/ollama/issues/8563/events
|
https://github.com/ollama/ollama/issues/8563
| 2,809,151,507
|
I_kwDOJ0Z1Ps6ncEAT
| 8,563
|
burn windows update at the stake
|
{
"login": "justquacks",
"id": 194549797,
"node_id": "U_kgDOC5iYJQ",
"avatar_url": "https://avatars.githubusercontent.com/u/194549797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justquacks",
"html_url": "https://github.com/justquacks",
"followers_url": "https://api.github.com/users/justquacks/followers",
"following_url": "https://api.github.com/users/justquacks/following{/other_user}",
"gists_url": "https://api.github.com/users/justquacks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justquacks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justquacks/subscriptions",
"organizations_url": "https://api.github.com/users/justquacks/orgs",
"repos_url": "https://api.github.com/users/justquacks/repos",
"events_url": "https://api.github.com/users/justquacks/events{/privacy}",
"received_events_url": "https://api.github.com/users/justquacks/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-24T10:57:57
| 2025-01-24T10:57:57
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Listen, it's 5 am, I am tired, I don't want to be awake anymore; if this makes any sense or is logically possible, please do ANYTHING.
I just posted [this](https://www.reddit.com/r/ollama/comments/1i8spym/recovering_lost_model_files_due_to_forced_windows/) to the subreddit.
Hours and hours lost to Windows Update. **Please just automatically turn it off while downloading models.**
When downloading, run `net stop wuauserv` or whatever you need to do.
While writing the Reddit post, I was again prompted to update Windows, mid recovery attempt.
WHEN WRITING THIS, I HAVE AGAIN BEEN PROMPTED TO UPDATE WINDOWS, AFTER I TOLD IT "I DON'T WANT TO UPDATE"

(My server is on the wrong timezone for some reason; I don't care to fix it.)
You have no idea the pain and rage that consumes me.
Please save others from knowing what I know.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8563/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6546
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6546/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6546/comments
|
https://api.github.com/repos/ollama/ollama/issues/6546/events
|
https://github.com/ollama/ollama/pull/6546
| 2,492,997,463
|
PR_kwDOJ0Z1Ps55wd2b
| 6,546
|
fix(test): do not clobber models directory
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-28T21:11:17
| 2024-08-28T22:37:48
| 2024-08-28T22:37:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6546",
"html_url": "https://github.com/ollama/ollama/pull/6546",
"diff_url": "https://github.com/ollama/ollama/pull/6546.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6546.patch",
"merged_at": "2024-08-28T22:37:47"
}
|
These tests create the following blobs in the user's blobs directory:
```
sha256-4e06a933feaf7e21f9ea4442bec46e1a5c99f5931d06caecb5603653628ad9f1
sha256-5738747671c0649396ed0b138cf8daed5bdb7140df5ee18d0a520295045feac3
sha256-d45102d885e542798514de3c17c3d431732d90ff3bd5c743e411dcd93c7f8d1a
```
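The fix itself is in Go, but the general pattern is the same in any test suite: point the models directory at a per-test temporary directory so test artifacts never land in the user's real store. A rough Python/pytest analogue of that isolation pattern, with illustrative names that are not Ollama's actual test code:
```python
# Rough analogue of the fix: redirect the models directory to a temp dir in tests
# so blobs created during the test never land in the user's ~/.ollama/models.
def test_creates_blobs_in_isolation(tmp_path, monkeypatch):
    monkeypatch.setenv("OLLAMA_MODELS", str(tmp_path))  # pytest fixtures
    blob = tmp_path / "blobs" / "sha256-deadbeef"
    blob.parent.mkdir(parents=True)
    blob.write_bytes(b"fake layer data")
    assert blob.exists()
    # tmp_path is removed by pytest, so nothing is left behind after the test
```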
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6546/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7006
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7006/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7006/comments
|
https://api.github.com/repos/ollama/ollama/issues/7006/events
|
https://github.com/ollama/ollama/issues/7006
| 2,553,565,462
|
I_kwDOJ0Z1Ps6YNFEW
| 7,006
|
Ollama can't use my Nvidia GPU anymore?
|
{
"login": "pdavis68",
"id": 2781885,
"node_id": "MDQ6VXNlcjI3ODE4ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2781885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdavis68",
"html_url": "https://github.com/pdavis68",
"followers_url": "https://api.github.com/users/pdavis68/followers",
"following_url": "https://api.github.com/users/pdavis68/following{/other_user}",
"gists_url": "https://api.github.com/users/pdavis68/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdavis68/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdavis68/subscriptions",
"organizations_url": "https://api.github.com/users/pdavis68/orgs",
"repos_url": "https://api.github.com/users/pdavis68/repos",
"events_url": "https://api.github.com/users/pdavis68/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdavis68/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-09-27T19:06:09
| 2024-10-16T14:37:03
| 2024-10-15T23:33:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm running Ollama with the following command:
`docker run --name ollama --gpus all -p 11434:11434 -e OLLAMA_DEBUG=1 -v ollama:/root/.ollama -d ollama/ollama:latest serve`
During startup, the logs show errors initializing cudart (see the logs at the end), and it is clearly not using the GPU.
From inside the container, if I run nvidia-smi it sees my RTX 3050, so that has me confused.
```
/usr/bin/nvidia-smi
Fri Sep 27 18:29:47 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.52.01 Driver Version: 555.99 CUDA Version: 12.5 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3050 On | 00000000:01:00.0 On | N/A |
| 0% 45C P8 11W / 130W | 1408MiB / 8192MiB | 6% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 42 G /Xwayland N/A |
| 0 N/A N/A 44 G /Xwayland N/A |
+-----------------------------------------------------------------------------------------+
```
This is what nvidia-smi in the host returns:
```
Fri Sep 27 14:02:26 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.99 Driver Version: 555.99 CUDA Version: 12.5 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Driver-Model | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3050 WDDM | 00000000:01:00.0 On | N/A |
| 0% 42C P8 11W / 130W | 875MiB / 8192MiB | 5% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
... [a bunch of processes]
+-----------------------------------------------------------------------------------------+
```
It used to work, and I periodically upgrade to the latest release. I don't recall when it last worked with the GPU since I hadn't used it recently, but I noticed this morning that it wasn't using the GPU, so I upgraded with my usual set of commands:
```
docker pull ollama/ollama:latest
docker stop /ollama
docker rm /ollama
docker run --name ollama --gpus all -p 11434:11434 -v ollama:/root/.ollama -d ollama/ollama:latest serve
```
Complete logs:
```
2024-09-27 13:31:20 2024/09/27 18:31:20 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.684Z level=INFO source=images.go:753 msg="total blobs: 59"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.693Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.695Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.12)"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v12 cpu cpu_avx cpu_avx2 cuda_v11]"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.700Z level=DEBUG source=gpu.go:86 msg="searching for GPU discovery libraries for NVIDIA"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.701Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcuda.so*
2024-09-27 13:31:20 time=2024-09-27T18:31:20.701Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.718Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths="[/usr/lib/x86_64-linux-gnu/libcuda.so.1 /usr/lib/wsl/drivers/nv_dispig.inf_amd64_cc569e59ca39c5fe/libcuda.so.1.1]"
2024-09-27 13:31:20 cuInit err: 500
2024-09-27 13:31:20 time=2024-09-27T18:31:20.727Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/lib/x86_64-linux-gnu/libcuda.so.1 error="cuda driver library init failure: 500"
2024-09-27 13:31:20 cuInit err: 500
2024-09-27 13:31:20 time=2024-09-27T18:31:20.727Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/lib/wsl/drivers/nv_dispig.inf_amd64_cc569e59ca39c5fe/libcuda.so.1.1 error="cuda driver library init failure: 500"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.727Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcudart.so*
2024-09-27 13:31:20 time=2024-09-27T18:31:20.727Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.729Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths="[/usr/lib/ollama/libcudart.so.12.4.99 /usr/lib/ollama/libcudart.so.11.3.109]"
2024-09-27 13:31:20 cudaSetDevice err: 500
2024-09-27 13:31:20 time=2024-09-27T18:31:20.736Z level=DEBUG source=gpu.go:537 msg="Unable to load cudart" library=/usr/lib/ollama/libcudart.so.12.4.99 error="cudart init failure: 500"
2024-09-27 13:31:20 cudaSetDevice err: 500
2024-09-27 13:31:20 time=2024-09-27T18:31:20.743Z level=DEBUG source=gpu.go:537 msg="Unable to load cudart" library=/usr/lib/ollama/libcudart.so.11.3.109 error="cudart init failure: 500"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.743Z level=DEBUG source=amd_linux.go:376 msg="amdgpu driver not detected /sys/module/amdgpu"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.743Z level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
2024-09-27 13:31:20 time=2024-09-27T18:31:20.743Z level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant=avx compute="" driver=0.0 name="" total="15.6 GiB" available="12.5 GiB"
2024-09-27 13:50:04 2024/09/27 18:50:04 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.245Z level=INFO source=images.go:753 msg="total blobs: 59"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.253Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.254Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.12)"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.257Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.258Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.258Z level=DEBUG source=gpu.go:86 msg="searching for GPU discovery libraries for NVIDIA"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.259Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcuda.so*
2024-09-27 13:50:04 time=2024-09-27T18:50:04.259Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.259Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths="[/usr/lib/x86_64-linux-gnu/libcuda.so.1 /usr/lib/wsl/drivers/nv_dispig.inf_amd64_cc569e59ca39c5fe/libcuda.so.1.1]"
2024-09-27 13:50:04 cuInit err: 500
2024-09-27 13:50:04 time=2024-09-27T18:50:04.491Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/lib/x86_64-linux-gnu/libcuda.so.1 error="cuda driver library init failure: 500"
2024-09-27 13:50:04 cuInit err: 500
2024-09-27 13:50:04 time=2024-09-27T18:50:04.491Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/lib/wsl/drivers/nv_dispig.inf_amd64_cc569e59ca39c5fe/libcuda.so.1.1 error="cuda driver library init failure: 500"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.491Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcudart.so*
2024-09-27 13:50:04 time=2024-09-27T18:50:04.491Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.492Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths="[/usr/lib/ollama/libcudart.so.12.4.99 /usr/lib/ollama/libcudart.so.11.3.109]"
2024-09-27 13:50:04 cudaSetDevice err: 500
2024-09-27 13:50:04 time=2024-09-27T18:50:04.517Z level=DEBUG source=gpu.go:537 msg="Unable to load cudart" library=/usr/lib/ollama/libcudart.so.12.4.99 error="cudart init failure: 500"
2024-09-27 13:50:04 cudaSetDevice err: 500
2024-09-27 13:50:04 time=2024-09-27T18:50:04.522Z level=DEBUG source=gpu.go:537 msg="Unable to load cudart" library=/usr/lib/ollama/libcudart.so.11.3.109 error="cudart init failure: 500"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.522Z level=DEBUG source=amd_linux.go:376 msg="amdgpu driver not detected /sys/module/amdgpu"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.522Z level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
2024-09-27 13:50:04 time=2024-09-27T18:50:04.522Z level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant=avx compute="" driver=0.0 name="" total="15.6 GiB" available="13.9 GiB"
```
## Update
I did some digging on the error code:
`CUDA_ERROR_NOT_FOUND = 500`
This indicates that a named symbol was not found. Examples of symbols are global/constant variable names, driver function names, texture names, and surface names.
Maybe some kind of driver mismatch? Hope that helps.
### OS
Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.12
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7006/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/799
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/799/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/799/comments
|
https://api.github.com/repos/ollama/ollama/issues/799/events
|
https://github.com/ollama/ollama/pull/799
| 1,944,774,615
|
PR_kwDOJ0Z1Ps5c4ErG
| 799
|
Fix JSON Marshal Escaping for Special Characters
|
{
"login": "deichbewohner",
"id": 54838329,
"node_id": "MDQ6VXNlcjU0ODM4MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/54838329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deichbewohner",
"html_url": "https://github.com/deichbewohner",
"followers_url": "https://api.github.com/users/deichbewohner/followers",
"following_url": "https://api.github.com/users/deichbewohner/following{/other_user}",
"gists_url": "https://api.github.com/users/deichbewohner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deichbewohner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deichbewohner/subscriptions",
"organizations_url": "https://api.github.com/users/deichbewohner/orgs",
"repos_url": "https://api.github.com/users/deichbewohner/repos",
"events_url": "https://api.github.com/users/deichbewohner/events{/privacy}",
"received_events_url": "https://api.github.com/users/deichbewohner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-10-16T09:27:58
| 2023-10-17T15:46:03
| 2023-10-17T15:46:02
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/799",
"html_url": "https://github.com/ollama/ollama/pull/799",
"diff_url": "https://github.com/ollama/ollama/pull/799.diff",
"patch_url": "https://github.com/ollama/ollama/pull/799.patch",
"merged_at": "2023-10-17T15:46:02"
}
|
Fixed the json.Marshal() behavior in llama.go to prevent automatic escaping of special characters like < and >. This ensures templates with these characters are correctly represented in the JSON output. Addresses issue https://github.com/jmorganca/ollama/issues/798
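For illustration only (this sketch is an editor's addition, not part of the pull request): Go's `json.Marshal` HTML-escapes `<`, `>`, and `&` by default, while a `json.Encoder` with `SetEscapeHTML(false)` leaves them intact. The payload below is hypothetical.
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical payload containing template markers such as <s> and </s>.
	payload := map[string]string{"template": "<s>{{ .Prompt }}</s>"}

	// json.Marshal escapes <, > and & to \u003c, \u003e and \u0026 by default.
	escaped, _ := json.Marshal(payload)
	fmt.Println(string(escaped))

	// An Encoder with HTML escaping disabled keeps the characters as-is.
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(false)
	_ = enc.Encode(payload)
	fmt.Print(buf.String())
}
```
Whether the fix in this PR uses `SetEscapeHTML` or another approach is not stated here; the snippet only demonstrates the escaping behavior being addressed.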
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/799/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2967
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2967/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2967/comments
|
https://api.github.com/repos/ollama/ollama/issues/2967/events
|
https://github.com/ollama/ollama/issues/2967
| 2,172,792,607
|
I_kwDOJ0Z1Ps6Bgi8f
| 2,967
|
can't find model
|
{
"login": "wq57wq57",
"id": 76765038,
"node_id": "MDQ6VXNlcjc2NzY1MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/76765038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wq57wq57",
"html_url": "https://github.com/wq57wq57",
"followers_url": "https://api.github.com/users/wq57wq57/followers",
"following_url": "https://api.github.com/users/wq57wq57/following{/other_user}",
"gists_url": "https://api.github.com/users/wq57wq57/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wq57wq57/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wq57wq57/subscriptions",
"organizations_url": "https://api.github.com/users/wq57wq57/orgs",
"repos_url": "https://api.github.com/users/wq57wq57/repos",
"events_url": "https://api.github.com/users/wq57wq57/events{/privacy}",
"received_events_url": "https://api.github.com/users/wq57wq57/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-07T02:05:13
| 2024-03-07T02:07:57
| 2024-03-07T02:07:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "wq57wq57",
"id": 76765038,
"node_id": "MDQ6VXNlcjc2NzY1MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/76765038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wq57wq57",
"html_url": "https://github.com/wq57wq57",
"followers_url": "https://api.github.com/users/wq57wq57/followers",
"following_url": "https://api.github.com/users/wq57wq57/following{/other_user}",
"gists_url": "https://api.github.com/users/wq57wq57/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wq57wq57/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wq57wq57/subscriptions",
"organizations_url": "https://api.github.com/users/wq57wq57/orgs",
"repos_url": "https://api.github.com/users/wq57wq57/repos",
"events_url": "https://api.github.com/users/wq57wq57/events{/privacy}",
"received_events_url": "https://api.github.com/users/wq57wq57/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2967/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/589
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/589/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/589/comments
|
https://api.github.com/repos/ollama/ollama/issues/589/events
|
https://github.com/ollama/ollama/pull/589
| 1,911,943,265
|
PR_kwDOJ0Z1Ps5bJewl
| 589
|
update install.sh
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-25T17:05:11
| 2023-09-25T18:08:26
| 2023-09-25T18:08:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/589",
"html_url": "https://github.com/ollama/ollama/pull/589",
"diff_url": "https://github.com/ollama/ollama/pull/589.diff",
"patch_url": "https://github.com/ollama/ollama/pull/589.patch",
"merged_at": "2023-09-25T18:08:25"
}
|
minor changes to warnings/errors
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/589/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/51
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/51/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/51/comments
|
https://api.github.com/repos/ollama/ollama/issues/51/events
|
https://github.com/ollama/ollama/pull/51
| 1,792,490,639
|
PR_kwDOJ0Z1Ps5U3Gck
| 51
|
more free
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-07-07T00:08:18
| 2023-07-07T00:13:32
| 2023-07-07T00:13:28
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/51",
"html_url": "https://github.com/ollama/ollama/pull/51",
"diff_url": "https://github.com/ollama/ollama/pull/51.diff",
"patch_url": "https://github.com/ollama/ollama/pull/51.patch",
"merged_at": "2023-07-07T00:13:28"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/51/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/51/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4792
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4792/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4792/comments
|
https://api.github.com/repos/ollama/ollama/issues/4792/events
|
https://github.com/ollama/ollama/issues/4792
| 2,330,057,822
|
I_kwDOJ0Z1Ps6K4dxe
| 4,792
|
Added the ability to configure maxvram per GPU when there are multiple GPUs
|
{
"login": "hamkido",
"id": 43724352,
"node_id": "MDQ6VXNlcjQzNzI0MzUy",
"avatar_url": "https://avatars.githubusercontent.com/u/43724352?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamkido",
"html_url": "https://github.com/hamkido",
"followers_url": "https://api.github.com/users/hamkido/followers",
"following_url": "https://api.github.com/users/hamkido/following{/other_user}",
"gists_url": "https://api.github.com/users/hamkido/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamkido/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamkido/subscriptions",
"organizations_url": "https://api.github.com/users/hamkido/orgs",
"repos_url": "https://api.github.com/users/hamkido/repos",
"events_url": "https://api.github.com/users/hamkido/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamkido/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-06-03T02:54:11
| 2024-07-03T23:23:13
| 2024-07-03T23:23:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently you can configure MaxVRAM, but you cannot configure a per-GPU max VRAM when you have dual GPUs.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4792/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2280
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2280/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2280/comments
|
https://api.github.com/repos/ollama/ollama/issues/2280/events
|
https://github.com/ollama/ollama/issues/2280
| 2,108,408,779
|
I_kwDOJ0Z1Ps59q8PL
| 2,280
|
MacOS Ollama fresh install won't actually open
|
{
"login": "recoi1er",
"id": 19810634,
"node_id": "MDQ6VXNlcjE5ODEwNjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/19810634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/recoi1er",
"html_url": "https://github.com/recoi1er",
"followers_url": "https://api.github.com/users/recoi1er/followers",
"following_url": "https://api.github.com/users/recoi1er/following{/other_user}",
"gists_url": "https://api.github.com/users/recoi1er/gists{/gist_id}",
"starred_url": "https://api.github.com/users/recoi1er/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/recoi1er/subscriptions",
"organizations_url": "https://api.github.com/users/recoi1er/orgs",
"repos_url": "https://api.github.com/users/recoi1er/repos",
"events_url": "https://api.github.com/users/recoi1er/events{/privacy}",
"received_events_url": "https://api.github.com/users/recoi1er/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2024-01-30T17:38:00
| 2025-01-30T03:12:56
| 2024-04-15T19:08:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I just installed a freshly downloaded copy of Ollama. The installation completed, but afterwards nothing opens or happens. The icon on my dock shows it as closed (no dot underneath), and there is no GUI. If I try to delete it, macOS says it cannot because the app is open. I can see it in Activity Monitor and end the task, but trying to reopen it afterwards still results in nothing. After force quitting I can delete the app and reinstall, which results in the same experience. I also restarted the Mac, deleted the app, and reinstalled.
MacOS: 14.3
Ollama: whatever version is current off your website
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2280/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2280/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/845
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/845/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/845/comments
|
https://api.github.com/repos/ollama/ollama/issues/845/events
|
https://github.com/ollama/ollama/issues/845
| 1,952,876,936
|
I_kwDOJ0Z1Ps50ZomI
| 845
|
Ollama Docker: Error LLama runner process has terminated
|
{
"login": "randywreed",
"id": 5059871,
"node_id": "MDQ6VXNlcjUwNTk4NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5059871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/randywreed",
"html_url": "https://github.com/randywreed",
"followers_url": "https://api.github.com/users/randywreed/followers",
"following_url": "https://api.github.com/users/randywreed/following{/other_user}",
"gists_url": "https://api.github.com/users/randywreed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/randywreed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/randywreed/subscriptions",
"organizations_url": "https://api.github.com/users/randywreed/orgs",
"repos_url": "https://api.github.com/users/randywreed/repos",
"events_url": "https://api.github.com/users/randywreed/events{/privacy}",
"received_events_url": "https://api.github.com/users/randywreed/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 41
| 2023-10-19T18:56:19
| 2025-01-08T05:16:49
| 2023-10-30T22:08:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm running the latest Docker version of Ollama (as of 10/19/2023). When I run `docker exec -it ollama ollama run mistral`,
I get the error "Llama runner process has terminated".
The container does not have a .ollama/logs directory, and journalctl is not installed.
Inside the container there seems to be plenty of space, and `free -m` reports 127 GB of RAM available.
Any help would be appreciated.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/845/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/845/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6984
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6984/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6984/comments
|
https://api.github.com/repos/ollama/ollama/issues/6984/events
|
https://github.com/ollama/ollama/issues/6984
| 2,551,243,280
|
I_kwDOJ0Z1Ps6YEOIQ
| 6,984
|
It can't use more than 64 threads on Windows
|
{
"login": "NektoDron",
"id": 26095298,
"node_id": "MDQ6VXNlcjI2MDk1Mjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/26095298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NektoDron",
"html_url": "https://github.com/NektoDron",
"followers_url": "https://api.github.com/users/NektoDron/followers",
"following_url": "https://api.github.com/users/NektoDron/following{/other_user}",
"gists_url": "https://api.github.com/users/NektoDron/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NektoDron/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NektoDron/subscriptions",
"organizations_url": "https://api.github.com/users/NektoDron/orgs",
"repos_url": "https://api.github.com/users/NektoDron/repos",
"events_url": "https://api.github.com/users/NektoDron/events{/privacy}",
"received_events_url": "https://api.github.com/users/NektoDron/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-09-26T18:12:40
| 2024-09-26T19:02:06
| 2024-09-26T19:02:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
It can't use more than 64 threads on Windows; only one NUMA node is used.
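As background (an editor's note, not part of the original report): Windows groups logical processors into processor groups of at most 64, and a process whose threads are never assigned to other groups is effectively capped at 64 threads. A minimal sketch, assuming a Windows build of Go, for checking how many groups and logical processors the machine exposes via the kernel32 calls `GetActiveProcessorGroupCount` and `GetActiveProcessorCount`:
```go
//go:build windows

package main

import (
	"fmt"
	"syscall"
)

func main() {
	kernel32 := syscall.NewLazyDLL("kernel32.dll")
	groupCount := kernel32.NewProc("GetActiveProcessorGroupCount")
	procCount := kernel32.NewProc("GetActiveProcessorCount")

	// GetActiveProcessorGroupCount takes no arguments and returns the group count.
	groups, _, _ := groupCount.Call()

	// ALL_PROCESSOR_GROUPS (0xffff) asks for the total across every group.
	const allProcessorGroups = 0xffff
	procs, _, _ := procCount.Call(uintptr(allProcessorGroups))

	fmt.Printf("processor groups: %d, logical processors: %d\n", groups, procs)
}
```
If the second number is larger than 64 while only 64 threads ever run, the process is likely confined to a single processor group.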
### OS
Windows
### GPU
Other
### CPU
AMD
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6984/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5485
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5485/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5485/comments
|
https://api.github.com/repos/ollama/ollama/issues/5485/events
|
https://github.com/ollama/ollama/pull/5485
| 2,391,057,691
|
PR_kwDOJ0Z1Ps50dMeR
| 5,485
|
Update api.md
|
{
"login": "chyok",
"id": 32629225,
"node_id": "MDQ6VXNlcjMyNjI5MjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/32629225?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chyok",
"html_url": "https://github.com/chyok",
"followers_url": "https://api.github.com/users/chyok/followers",
"following_url": "https://api.github.com/users/chyok/following{/other_user}",
"gists_url": "https://api.github.com/users/chyok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chyok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chyok/subscriptions",
"organizations_url": "https://api.github.com/users/chyok/orgs",
"repos_url": "https://api.github.com/users/chyok/repos",
"events_url": "https://api.github.com/users/chyok/events{/privacy}",
"received_events_url": "https://api.github.com/users/chyok/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-04T14:59:47
| 2024-07-14T06:53:40
| 2024-07-14T06:53:40
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5485",
"html_url": "https://github.com/ollama/ollama/pull/5485",
"diff_url": "https://github.com/ollama/ollama/pull/5485.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5485.patch",
"merged_at": null
}
|
Update the documentation according to the latest API.
`curl http://localhost:11434/v1/tags`
```json
{
"models": [
{
"name": "codellama:latest",
"model": "codellama:latest",
"modified_at": "2024-07-04T22:48:10.8105706+08:00",
"size": 3825910662,
"digest": "8fdf8f752f6e80de33e82f381aba784c025982752cd1ae9377add66449d2225f",
"details": {
"parent_model": "",
"format": "gguf",
"family": "llama",
"families": null,
"parameter_size": "7B",
"quantization_level": "Q4_0"
}
},
{
"name": "gemma2:27b",
"model": "gemma2:27b",
"modified_at": "2024-07-04T22:13:56.6074799+08:00",
"size": 15628387569,
"digest": "371038893ee3aeecdd361850ba3a13c3f1f08f5e0c448ac11927ea15809b2b6b",
"details": {
"parent_model": "",
"format": "gguf",
"family": "gemma2",
"families": [
"gemma2"
],
"parameter_size": "27.2B",
"quantization_level": "Q4_0"
}
},
{
"name": "gemma2:latest",
"model": "gemma2:latest",
"modified_at": "2024-06-28T20:51:21.2480459+08:00",
"size": 5453010625,
"digest": "6008d85d064649fd1980f730982f767b09d30848d00248fc51cb7d53536504de",
"details": {
"parent_model": "",
"format": "gguf",
"family": "gemma2",
"families": [
"gemma2"
],
"parameter_size": "9.2B",
"quantization_level": "Q4_0"
}
},
{
"name": "llama3:latest",
"model": "llama3:latest",
"modified_at": "2024-06-15T19:51:19.4751104+08:00",
"size": 4661224676,
"digest": "365c0bd3c000a25d28ddbf732fe1c6add414de7275464c4e4d1c3b5fcb5d8ad1",
"details": {
"parent_model": "",
"format": "gguf",
"family": "llama",
"families": [
"llama"
],
"parameter_size": "8.0B",
"quantization_level": "Q4_0"
}
}
]
}
```
|
{
"login": "chyok",
"id": 32629225,
"node_id": "MDQ6VXNlcjMyNjI5MjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/32629225?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chyok",
"html_url": "https://github.com/chyok",
"followers_url": "https://api.github.com/users/chyok/followers",
"following_url": "https://api.github.com/users/chyok/following{/other_user}",
"gists_url": "https://api.github.com/users/chyok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chyok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chyok/subscriptions",
"organizations_url": "https://api.github.com/users/chyok/orgs",
"repos_url": "https://api.github.com/users/chyok/repos",
"events_url": "https://api.github.com/users/chyok/events{/privacy}",
"received_events_url": "https://api.github.com/users/chyok/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5485/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5782
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5782/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5782/comments
|
https://api.github.com/repos/ollama/ollama/issues/5782/events
|
https://github.com/ollama/ollama/pull/5782
| 2,417,328,500
|
PR_kwDOJ0Z1Ps51010Z
| 5,782
|
Add hermes-2-pro-llama-3 to the testing matrix
|
{
"login": "andreibondarev",
"id": 541665,
"node_id": "MDQ6VXNlcjU0MTY2NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/541665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andreibondarev",
"html_url": "https://github.com/andreibondarev",
"followers_url": "https://api.github.com/users/andreibondarev/followers",
"following_url": "https://api.github.com/users/andreibondarev/following{/other_user}",
"gists_url": "https://api.github.com/users/andreibondarev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andreibondarev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andreibondarev/subscriptions",
"organizations_url": "https://api.github.com/users/andreibondarev/orgs",
"repos_url": "https://api.github.com/users/andreibondarev/repos",
"events_url": "https://api.github.com/users/andreibondarev/events{/privacy}",
"received_events_url": "https://api.github.com/users/andreibondarev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2024-07-18T20:24:45
| 2024-09-04T13:44:19
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5782",
"html_url": "https://github.com/ollama/ollama/pull/5782",
"diff_url": "https://github.com/ollama/ollama/pull/5782.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5782.patch",
"merged_at": null
}
|
Adding `interstellarninja/hermes-2-pro-llama-3-8b` to the testing matrix.
I'm not 100% sure whether the template is completely identical to that of `llama3-groq-tool-use`, since some of the tokens differ, specifically `bos_token`, `eos_token`, and `pad_token`, across the two models: [Groq/Llama-3-Groq-8B-Tool-Use/blob/main/tokenizer_config.json](https://huggingface.co/Groq/Llama-3-Groq-8B-Tool-Use/blob/main/tokenizer_config.json#L2108-L2117) and [NousResearch/Hermes-2-Theta-Llama-3-8B/blob/main/tokenizer_config.json](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B/blob/main/tokenizer_config.json#L2060-L2078).
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5782/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5782/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1929
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1929/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1929/comments
|
https://api.github.com/repos/ollama/ollama/issues/1929/events
|
https://github.com/ollama/ollama/issues/1929
| 2,077,167,556
|
I_kwDOJ0Z1Ps57zw_E
| 1,929
|
WARNING: No NVIDIA GPU detected. Ollama will run in CPU-only mode.
|
{
"login": "xzkxzk12301230",
"id": 18141650,
"node_id": "MDQ6VXNlcjE4MTQxNjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/18141650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xzkxzk12301230",
"html_url": "https://github.com/xzkxzk12301230",
"followers_url": "https://api.github.com/users/xzkxzk12301230/followers",
"following_url": "https://api.github.com/users/xzkxzk12301230/following{/other_user}",
"gists_url": "https://api.github.com/users/xzkxzk12301230/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xzkxzk12301230/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xzkxzk12301230/subscriptions",
"organizations_url": "https://api.github.com/users/xzkxzk12301230/orgs",
"repos_url": "https://api.github.com/users/xzkxzk12301230/repos",
"events_url": "https://api.github.com/users/xzkxzk12301230/events{/privacy}",
"received_events_url": "https://api.github.com/users/xzkxzk12301230/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-01-11T16:57:15
| 2024-02-19T19:49:37
| 2024-02-19T19:49:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I use WSL2, and my GPU information is as follows. When I install Ollama, it warns: No NVIDIA GPU detected. Ollama will run in CPU-only mode.
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 546.33 Driver Version: 546.33 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 4060 Ti WDDM | 00000000:03:00.0 On | N/A |
| 0% 29C P8 7W / 180W | 581MiB / 16380MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1929/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7460
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7460/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7460/comments
|
https://api.github.com/repos/ollama/ollama/issues/7460/events
|
https://github.com/ollama/ollama/issues/7460
| 2,628,547,667
|
I_kwDOJ0Z1Ps6crHRT
| 7,460
|
Windows installer breaks when WSL install was previously used
|
{
"login": "TangentFoxy",
"id": 15093567,
"node_id": "MDQ6VXNlcjE1MDkzNTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/15093567?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TangentFoxy",
"html_url": "https://github.com/TangentFoxy",
"followers_url": "https://api.github.com/users/TangentFoxy/followers",
"following_url": "https://api.github.com/users/TangentFoxy/following{/other_user}",
"gists_url": "https://api.github.com/users/TangentFoxy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TangentFoxy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TangentFoxy/subscriptions",
"organizations_url": "https://api.github.com/users/TangentFoxy/orgs",
"repos_url": "https://api.github.com/users/TangentFoxy/repos",
"events_url": "https://api.github.com/users/TangentFoxy/events{/privacy}",
"received_events_url": "https://api.github.com/users/TangentFoxy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
},
{
"id": 6677675697,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgU-sQ",
"url": "https://api.github.com/repos/ollama/ollama/labels/wsl",
"name": "wsl",
"color": "7E0821",
"default": false,
"description": "Issues using WSL"
},
{
"id": 6678628138,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjhPHKg",
"url": "https://api.github.com/repos/ollama/ollama/labels/install",
"name": "install",
"color": "E0B88D",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-11-01T07:54:12
| 2024-11-02T09:16:25
| 2024-11-02T09:16:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After discovering that Ollama was not automatically updating, I ran the installer to upgrade it (per the FAQ instructions). Now a 500 error is generated when I try to access it, and the only logs and config available don't show anything useful. Additionally, the `ollama` command is not available.

No additional info provided.
### app.log
```
time=2024-11-01T01:44:12.158-06:00 level=INFO source=logging.go:50 msg="ollama app started"
time=2024-11-01T01:44:12.381-06:00 level=INFO source=store.go:96 msg="wrote store: C:\\Users\\username\\AppData\\Local\\Ollama\\config.json"
time=2024-11-01T01:44:12.395-06:00 level=INFO source=store.go:96 msg="wrote store: C:\\Users\\username\\AppData\\Local\\Ollama\\config.json"
time=2024-11-01T01:44:12.400-06:00 level=INFO source=server.go:176 msg="unable to connect to server"
time=2024-11-01T01:44:12.400-06:00 level=INFO source=server.go:135 msg="starting server..."
time=2024-11-01T01:44:12.715-06:00 level=INFO source=server.go:121 msg="started ollama server with pid 19228"
time=2024-11-01T01:44:12.715-06:00 level=INFO source=server.go:123 msg="ollama server logs C:\\Users\\username\\AppData\\Local\\Ollama\\server.log"
```
### config.json
```json
{"id":"833b6b29-adbe-422e-98ec-5c78824cc785","first-time-run":true}
```
### server.log
Is an empty file.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
latest - I cannot say the *exact* version because it did not work
|
{
"login": "TangentFoxy",
"id": 15093567,
"node_id": "MDQ6VXNlcjE1MDkzNTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/15093567?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TangentFoxy",
"html_url": "https://github.com/TangentFoxy",
"followers_url": "https://api.github.com/users/TangentFoxy/followers",
"following_url": "https://api.github.com/users/TangentFoxy/following{/other_user}",
"gists_url": "https://api.github.com/users/TangentFoxy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TangentFoxy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TangentFoxy/subscriptions",
"organizations_url": "https://api.github.com/users/TangentFoxy/orgs",
"repos_url": "https://api.github.com/users/TangentFoxy/repos",
"events_url": "https://api.github.com/users/TangentFoxy/events{/privacy}",
"received_events_url": "https://api.github.com/users/TangentFoxy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7460/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2504
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2504/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2504/comments
|
https://api.github.com/repos/ollama/ollama/issues/2504/events
|
https://github.com/ollama/ollama/pull/2504
| 2,135,331,720
|
PR_kwDOJ0Z1Ps5m6ayx
| 2,504
|
Update README.md with new macOS app
|
{
"login": "gluonfield",
"id": 5672094,
"node_id": "MDQ6VXNlcjU2NzIwOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5672094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gluonfield",
"html_url": "https://github.com/gluonfield",
"followers_url": "https://api.github.com/users/gluonfield/followers",
"following_url": "https://api.github.com/users/gluonfield/following{/other_user}",
"gists_url": "https://api.github.com/users/gluonfield/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gluonfield/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gluonfield/subscriptions",
"organizations_url": "https://api.github.com/users/gluonfield/orgs",
"repos_url": "https://api.github.com/users/gluonfield/repos",
"events_url": "https://api.github.com/users/gluonfield/events{/privacy}",
"received_events_url": "https://api.github.com/users/gluonfield/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-14T22:49:55
| 2024-02-22T18:09:29
| 2024-02-22T18:09:29
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2504",
"html_url": "https://github.com/ollama/ollama/pull/2504",
"diff_url": "https://github.com/ollama/ollama/pull/2504.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2504.patch",
"merged_at": "2024-02-22T18:09:29"
}
|
- Enchanted is now supported for desktop on macOS
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2504/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2268
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2268/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2268/comments
|
https://api.github.com/repos/ollama/ollama/issues/2268/events
|
https://github.com/ollama/ollama/issues/2268
| 2,107,247,067
|
I_kwDOJ0Z1Ps59mgnb
| 2,268
|
BUG: updating ollama per curl, overwrites the manually edited `/etc/systemd/system/ollama.service`
|
{
"login": "BananaAcid",
"id": 1894723,
"node_id": "MDQ6VXNlcjE4OTQ3MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1894723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BananaAcid",
"html_url": "https://github.com/BananaAcid",
"followers_url": "https://api.github.com/users/BananaAcid/followers",
"following_url": "https://api.github.com/users/BananaAcid/following{/other_user}",
"gists_url": "https://api.github.com/users/BananaAcid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BananaAcid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BananaAcid/subscriptions",
"organizations_url": "https://api.github.com/users/BananaAcid/orgs",
"repos_url": "https://api.github.com/users/BananaAcid/repos",
"events_url": "https://api.github.com/users/BananaAcid/events{/privacy}",
"received_events_url": "https://api.github.com/users/BananaAcid/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-01-30T08:39:48
| 2024-03-11T22:09:40
| 2024-03-11T22:09:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
After updating using `curl https://ollama.ai/install.sh | sh`
the service file `/etc/systemd/system/ollama.service` gets overwritten.
Losing all `Environment=OLLAMA...` changes.
Maybe check if it exists first, and not overwrite it.
**-- there seems to be no notice about it overwriting in the docs.**
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2268/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4746
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4746/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4746/comments
|
https://api.github.com/repos/ollama/ollama/issues/4746/events
|
https://github.com/ollama/ollama/pull/4746
| 2,327,170,024
|
PR_kwDOJ0Z1Ps5xF1oI
| 4,746
|
server: try github.com/minio/sha256-simd
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-05-31T07:43:08
| 2024-06-05T22:21:03
| 2024-06-05T22:21:03
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4746",
"html_url": "https://github.com/ollama/ollama/pull/4746",
"diff_url": "https://github.com/ollama/ollama/pull/4746.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4746.patch",
"merged_at": null
}
|
This is an experimental change to see if sha256-simd is faster than the standard library's sha256 implementation. It is not yet clear if this will be a net win, but it is worth trying.
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4746/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4995
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4995/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4995/comments
|
https://api.github.com/repos/ollama/ollama/issues/4995/events
|
https://github.com/ollama/ollama/issues/4995
| 2,347,979,566
|
I_kwDOJ0Z1Ps6L81Mu
| 4,995
|
Ollama GPU not loading properly
|
{
"login": "tankvpython",
"id": 140445384,
"node_id": "U_kgDOCF8GyA",
"avatar_url": "https://avatars.githubusercontent.com/u/140445384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tankvpython",
"html_url": "https://github.com/tankvpython",
"followers_url": "https://api.github.com/users/tankvpython/followers",
"following_url": "https://api.github.com/users/tankvpython/following{/other_user}",
"gists_url": "https://api.github.com/users/tankvpython/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tankvpython/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tankvpython/subscriptions",
"organizations_url": "https://api.github.com/users/tankvpython/orgs",
"repos_url": "https://api.github.com/users/tankvpython/repos",
"events_url": "https://api.github.com/users/tankvpython/events{/privacy}",
"received_events_url": "https://api.github.com/users/tankvpython/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2024-06-12T07:01:48
| 2024-09-05T22:12:04
| 2024-09-05T22:12:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am facing an issue with the Ollama service. I have an **RTX 4090 GPU** with 80GB of RAM and 24GB of VRAM. When I run the Llama 3 70B model and ask it a question, it initially loads on the GPU, but after 5-10 seconds, it shifts entirely to the CPU. This causes the response time to be slow. Please provide me with a solution for this. Thank you in advance.
Note: GPU load is 6-12% and CPU load is 70%.
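For a rough sense of scale (assuming the default ~4-bit quantization): 70B parameters at roughly 0.57 bytes per parameter is about 40 GB of weights, which cannot fit in 24 GB of VRAM, so only part of the layers can be offloaded to the GPU and the rest run on the CPU. That split, rather than a configuration error, is the likely reason CPU load dominates and responses are slow.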
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
v0.1.43
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4995/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/775
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/775/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/775/comments
|
https://api.github.com/repos/ollama/ollama/issues/775/events
|
https://github.com/ollama/ollama/issues/775
| 1,941,367,363
|
I_kwDOJ0Z1Ps5ztupD
| 775
|
How to create my own model with GGUF file??
|
{
"login": "wertyac",
"id": 5260412,
"node_id": "MDQ6VXNlcjUyNjA0MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5260412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wertyac",
"html_url": "https://github.com/wertyac",
"followers_url": "https://api.github.com/users/wertyac/followers",
"following_url": "https://api.github.com/users/wertyac/following{/other_user}",
"gists_url": "https://api.github.com/users/wertyac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wertyac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wertyac/subscriptions",
"organizations_url": "https://api.github.com/users/wertyac/orgs",
"repos_url": "https://api.github.com/users/wertyac/repos",
"events_url": "https://api.github.com/users/wertyac/events{/privacy}",
"received_events_url": "https://api.github.com/users/wertyac/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2023-10-13T07:04:39
| 2024-09-05T14:34:54
| 2023-10-24T00:45:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I followed the instructions to create my own model, but it failed.
The steps are as follows:
1) I have my own GGUF file in /opt/cllama2-13b-16k/chinese-alpaca-2-13b-16k.Q4_0.gguf.
2) I created the Modelfile in /opt/cllama2-13b-16k:
################################
FROM /opt/cllama2-13b-16k/chinese-alpaca-2-13b-16k.Q4_0.gguf
3) I create my own cllama2 with:
root@144server:/opt/cllama2-13b-16k# ollama create cllama2-13b-16k -f ./Modelfile
parsing modelfile
looking for model
pulling model file
⠧ pulling manifest Error: pull model manifest: model not found
How do I create my own model? It seems the instructions are not correct.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/775/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6020
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6020/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6020/comments
|
https://api.github.com/repos/ollama/ollama/issues/6020/events
|
https://github.com/ollama/ollama/issues/6020
| 2,433,657,827
|
I_kwDOJ0Z1Ps6RDqvj
| 6,020
|
Not utilizing RAM after VRAM is exhausted
|
{
"login": "uploadsjuicers",
"id": 176449026,
"node_id": "U_kgDOCoRmAg",
"avatar_url": "https://avatars.githubusercontent.com/u/176449026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uploadsjuicers",
"html_url": "https://github.com/uploadsjuicers",
"followers_url": "https://api.github.com/users/uploadsjuicers/followers",
"following_url": "https://api.github.com/users/uploadsjuicers/following{/other_user}",
"gists_url": "https://api.github.com/users/uploadsjuicers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uploadsjuicers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uploadsjuicers/subscriptions",
"organizations_url": "https://api.github.com/users/uploadsjuicers/orgs",
"repos_url": "https://api.github.com/users/uploadsjuicers/repos",
"events_url": "https://api.github.com/users/uploadsjuicers/events{/privacy}",
"received_events_url": "https://api.github.com/users/uploadsjuicers/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-07-27T23:27:17
| 2024-07-28T02:15:38
| 2024-07-28T02:15:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am running Ollama in Docker with an Nvidia GPU. When I load a model that is larger than the 8 GB of VRAM my GPU has, my RAM usage doesn't increase, though the model does respond. I am assuming it is using mmap instead of RAM. Is this intended, or is there a way to configure it to use RAM?
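A minimal probe sketch, assuming this Ollama version passes a `use_mmap` option through to the runner (that option name is an assumption, not confirmed above): with mmap disabled, the CPU-side layers should be loaded into resident RAM, so RSS should then grow with the model size.
```python
# Hypothetical probe (assumption: this Ollama build honors "use_mmap" in options).
# With mmap disabled, CPU-offloaded layers should appear as resident RAM usage.
import json
import urllib.request

payload = {
    "model": "llama3:70b",            # placeholder model name
    "prompt": "hello",
    "options": {"use_mmap": False},   # assumed option name, passed to the runner
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    for line in resp:                 # /api/generate streams newline-delimited JSON
        print(json.loads(line).get("response", ""), end="")
```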
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.0
|
{
"login": "uploadsjuicers",
"id": 176449026,
"node_id": "U_kgDOCoRmAg",
"avatar_url": "https://avatars.githubusercontent.com/u/176449026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uploadsjuicers",
"html_url": "https://github.com/uploadsjuicers",
"followers_url": "https://api.github.com/users/uploadsjuicers/followers",
"following_url": "https://api.github.com/users/uploadsjuicers/following{/other_user}",
"gists_url": "https://api.github.com/users/uploadsjuicers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uploadsjuicers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uploadsjuicers/subscriptions",
"organizations_url": "https://api.github.com/users/uploadsjuicers/orgs",
"repos_url": "https://api.github.com/users/uploadsjuicers/repos",
"events_url": "https://api.github.com/users/uploadsjuicers/events{/privacy}",
"received_events_url": "https://api.github.com/users/uploadsjuicers/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6020/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6191
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6191/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6191/comments
|
https://api.github.com/repos/ollama/ollama/issues/6191/events
|
https://github.com/ollama/ollama/issues/6191
| 2,449,696,811
|
I_kwDOJ0Z1Ps6SA2gr
| 6,191
|
Custom models not using GPU for inference
|
{
"login": "CyanMystery",
"id": 51013526,
"node_id": "MDQ6VXNlcjUxMDEzNTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/51013526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CyanMystery",
"html_url": "https://github.com/CyanMystery",
"followers_url": "https://api.github.com/users/CyanMystery/followers",
"following_url": "https://api.github.com/users/CyanMystery/following{/other_user}",
"gists_url": "https://api.github.com/users/CyanMystery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CyanMystery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CyanMystery/subscriptions",
"organizations_url": "https://api.github.com/users/CyanMystery/orgs",
"repos_url": "https://api.github.com/users/CyanMystery/repos",
"events_url": "https://api.github.com/users/CyanMystery/events{/privacy}",
"received_events_url": "https://api.github.com/users/CyanMystery/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-08-06T00:31:21
| 2024-09-02T23:31:31
| 2024-09-02T23:31:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I downloaded this model **https://www.modelscope.cn/models/LLM-Research/Mistral-7B-Instruct-v0.3-GGUF/files** and created it from a Modelfile, but it does not use the GPU for inference. Running `ollama run mistral` directly works normally.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.0
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6191/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6607
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6607/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6607/comments
|
https://api.github.com/repos/ollama/ollama/issues/6607/events
|
https://github.com/ollama/ollama/issues/6607
| 2,503,073,408
|
I_kwDOJ0Z1Ps6VMd6A
| 6,607
|
docker image for rocm-3.5.1 to run on older AMD gpus
|
{
"login": "drhboss",
"id": 61165013,
"node_id": "MDQ6VXNlcjYxMTY1MDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/61165013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drhboss",
"html_url": "https://github.com/drhboss",
"followers_url": "https://api.github.com/users/drhboss/followers",
"following_url": "https://api.github.com/users/drhboss/following{/other_user}",
"gists_url": "https://api.github.com/users/drhboss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drhboss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drhboss/subscriptions",
"organizations_url": "https://api.github.com/users/drhboss/orgs",
"repos_url": "https://api.github.com/users/drhboss/repos",
"events_url": "https://api.github.com/users/drhboss/events{/privacy}",
"received_events_url": "https://api.github.com/users/drhboss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-09-03T14:43:00
| 2024-09-03T20:57:15
| 2024-09-03T20:57:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Any chance an image with rocm-3.5.1 can be prepared for older GPUs, e.g. the RX 580?
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6607/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1740
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1740/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1740/comments
|
https://api.github.com/repos/ollama/ollama/issues/1740/events
|
https://github.com/ollama/ollama/issues/1740
| 2,060,403,352
|
I_kwDOJ0Z1Ps56z0KY
| 1,740
|
PowerInfer Enhancement
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-12-29T21:11:09
| 2023-12-29T21:23:21
| 2023-12-29T21:23:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I keep seeing posts about PowerInfer https://github.com/SJTU-IPADS/PowerInfer which (if I understand it) keeps frequently activated ("hot") neurons in GPU memory and rarely activated ones in CPU memory. This results in an 11x speed up.
It looks like models need to be updated to use this, so it's a pain. BUT.... 11x speed up.
I wonder if the model could be updated automatically so after download it revises it, and stores it.
Anyways, just for interest sake. **11X speed up!!!**
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1740/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1217
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1217/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1217/comments
|
https://api.github.com/repos/ollama/ollama/issues/1217/events
|
https://github.com/ollama/ollama/issues/1217
| 2,003,780,364
|
I_kwDOJ0Z1Ps53b0MM
| 1,217
|
\n Modelfile and path deprecated?
|
{
"login": "Luxadevi",
"id": 116653852,
"node_id": "U_kgDOBvP_HA",
"avatar_url": "https://avatars.githubusercontent.com/u/116653852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Luxadevi",
"html_url": "https://github.com/Luxadevi",
"followers_url": "https://api.github.com/users/Luxadevi/followers",
"following_url": "https://api.github.com/users/Luxadevi/following{/other_user}",
"gists_url": "https://api.github.com/users/Luxadevi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Luxadevi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luxadevi/subscriptions",
"organizations_url": "https://api.github.com/users/Luxadevi/orgs",
"repos_url": "https://api.github.com/users/Luxadevi/repos",
"events_url": "https://api.github.com/users/Luxadevi/events{/privacy}",
"received_events_url": "https://api.github.com/users/Luxadevi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2023-11-21T08:59:31
| 2023-11-21T21:59:26
| 2023-11-21T21:59:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
Could you tell me why you removed the "path" parameter from the /api/create endpoint?
And why was it replaced with parsing the whole modelfile in the curl command?
Really struggling to set up some Python logic where \n is the escape character for a new line; it goes completely mental on it.
Would it be possible to have different syntax for this?
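A minimal sketch of one way around the escaping problem, assuming `/api/create` still accepts a `name` and a `modelfile` string as in the curl examples (the field names are an assumption here): build the modelfile as an ordinary multi-line Python string and let `json.dumps` do the newline escaping instead of writing `\n` by hand.
```python
# Workaround sketch: keep real newlines in a normal Python string and let
# json.dumps escape them, instead of hand-writing "\n" into the payload.
# Assumption: /api/create accepts {"name": ..., "modelfile": ...} as in the
# curl examples; field names may differ between versions.
import json
import urllib.request

modelfile = """FROM llama2
SYSTEM You are a terse assistant.
PARAMETER temperature 0.7
"""

payload = {"name": "my-model", "modelfile": modelfile}
req = urllib.request.Request(
    "http://localhost:11434/api/create",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    for line in resp:                 # create streams status lines as JSON
        print(json.loads(line).get("status", ""))
```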
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1217/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4913
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4913/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4913/comments
|
https://api.github.com/repos/ollama/ollama/issues/4913/events
|
https://github.com/ollama/ollama/issues/4913
| 2,340,856,828
|
I_kwDOJ0Z1Ps6LhqP8
| 4,913
|
Ollama model download fails on kubernetes
|
{
"login": "samyIO",
"id": 65492678,
"node_id": "MDQ6VXNlcjY1NDkyNjc4",
"avatar_url": "https://avatars.githubusercontent.com/u/65492678?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samyIO",
"html_url": "https://github.com/samyIO",
"followers_url": "https://api.github.com/users/samyIO/followers",
"following_url": "https://api.github.com/users/samyIO/following{/other_user}",
"gists_url": "https://api.github.com/users/samyIO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samyIO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samyIO/subscriptions",
"organizations_url": "https://api.github.com/users/samyIO/orgs",
"repos_url": "https://api.github.com/users/samyIO/repos",
"events_url": "https://api.github.com/users/samyIO/events{/privacy}",
"received_events_url": "https://api.github.com/users/samyIO/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-07T17:06:07
| 2024-06-09T17:20:16
| 2024-06-09T17:20:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm using the latest Helm chart of Ollama and deployed it to Kubernetes. When I enter the pod's shell and use "ollama pull <model>", it starts the download at a fast pace (around 250 Mbps) but suddenly drops to a very low rate when reaching the last GB of the download. Most of the time it ends with an error that I don't understand. The Kubernetes side is fine; my monitoring and network speed tests verified everything. Do you have an idea? Do you think this is caused by Ollama itself? I never had this issue locally.

### OS
Docker
### GPU
Nvidia
### CPU
Other
### Ollama version
0.33.0
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4913/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3915
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3915/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3915/comments
|
https://api.github.com/repos/ollama/ollama/issues/3915/events
|
https://github.com/ollama/ollama/issues/3915
| 2,264,063,065
|
I_kwDOJ0Z1Ps6G8txZ
| 3,915
|
ChatOllama does not support the FunctionMessage message type
|
{
"login": "solarslurpi",
"id": 5243679,
"node_id": "MDQ6VXNlcjUyNDM2Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5243679?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/solarslurpi",
"html_url": "https://github.com/solarslurpi",
"followers_url": "https://api.github.com/users/solarslurpi/followers",
"following_url": "https://api.github.com/users/solarslurpi/following{/other_user}",
"gists_url": "https://api.github.com/users/solarslurpi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/solarslurpi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/solarslurpi/subscriptions",
"organizations_url": "https://api.github.com/users/solarslurpi/orgs",
"repos_url": "https://api.github.com/users/solarslurpi/repos",
"events_url": "https://api.github.com/users/solarslurpi/events{/privacy}",
"received_events_url": "https://api.github.com/users/solarslurpi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-25T16:51:04
| 2024-04-26T11:37:47
| 2024-04-26T11:37:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When using LangGraph, the FunctionMessage is used. This is not supported in
```
from langchain_experimental.llms.ollama_functions import OllamaFunctions
model = OllamaFunctions(model="llama3")
```
for example a method in the `ChatOllama` class:
```
def _convert_messages_to_ollama_messages(
self, messages: List[BaseMessage]
) -> List[Dict[str, Union[str, List[str]]]]:
ollama_messages: List = []
for message in messages:
role = ""
if isinstance(message, HumanMessage):
role = "user"
elif isinstance(message, AIMessage):
role = "assistant"
elif isinstance(message, SystemMessage):
role = "system"
else:
raise ValueError("Received unsupported message type for Ollama.")
```
The challenge is not supporting blocks use of LangGraph.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.31
|
{
"login": "solarslurpi",
"id": 5243679,
"node_id": "MDQ6VXNlcjUyNDM2Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5243679?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/solarslurpi",
"html_url": "https://github.com/solarslurpi",
"followers_url": "https://api.github.com/users/solarslurpi/followers",
"following_url": "https://api.github.com/users/solarslurpi/following{/other_user}",
"gists_url": "https://api.github.com/users/solarslurpi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/solarslurpi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/solarslurpi/subscriptions",
"organizations_url": "https://api.github.com/users/solarslurpi/orgs",
"repos_url": "https://api.github.com/users/solarslurpi/repos",
"events_url": "https://api.github.com/users/solarslurpi/events{/privacy}",
"received_events_url": "https://api.github.com/users/solarslurpi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3915/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6160
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6160/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6160/comments
|
https://api.github.com/repos/ollama/ollama/issues/6160/events
|
https://github.com/ollama/ollama/issues/6160
| 2,447,128,251
|
I_kwDOJ0Z1Ps6R3Da7
| 6,160
|
Ollama ps says 22 GB, but nvidia-smi says 16GB with flash attention enabled
|
{
"login": "chigkim",
"id": 22120994,
"node_id": "MDQ6VXNlcjIyMTIwOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chigkim",
"html_url": "https://github.com/chigkim",
"followers_url": "https://api.github.com/users/chigkim/followers",
"following_url": "https://api.github.com/users/chigkim/following{/other_user}",
"gists_url": "https://api.github.com/users/chigkim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chigkim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chigkim/subscriptions",
"organizations_url": "https://api.github.com/users/chigkim/orgs",
"repos_url": "https://api.github.com/users/chigkim/repos",
"events_url": "https://api.github.com/users/chigkim/events{/privacy}",
"received_events_url": "https://api.github.com/users/chigkim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 25
| 2024-08-04T13:07:31
| 2024-12-27T01:28:19
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Ollama indicates the model is utilizing 22GB, but nvidia-smi says it's utilizing 16GB.
The model was fully loaded and generating responses when I ran nvidia-smi.
Here's the log:
```
time=2024-08-04T12:51:05.930Z level=INFO source=server.go:384 msg="starting llama server" cmd="/tmp/ollama1680167563/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-d36aafdc1d822f932f3fd3ddc18296628764c5e43f153e9c02b29f5c4525cf2a --ctx-size 65536 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --flash-attn --parallel 32 --port 34551"
ollama ps
NAME ID SIZE PROCESSOR UNTIL
llama3.1:8b-instruct-q8_0 9b90f0f552e7 22 GB 100% GPU 27 seconds from now
root@272875ddc015:~# nvidia-smi
Sun Aug 4 12:57:31 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.02 Driver Version: 555.42.02 CUDA Version: 12.5 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4090 On | 00000000:41:00.0 Off | Off |
| 74% 68C P0 353W / 450W | 16560MiB / 24564MiB | 100% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.3
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6160/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/6160/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5041
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5041/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5041/comments
|
https://api.github.com/repos/ollama/ollama/issues/5041/events
|
https://github.com/ollama/ollama/issues/5041
| 2,352,463,379
|
I_kwDOJ0Z1Ps6MN74T
| 5,041
|
Easy troubleshooting for Windows internal networks - known as "Connection refused" issue
|
{
"login": "HyperUpscale",
"id": 126105457,
"node_id": "U_kgDOB4Q3cQ",
"avatar_url": "https://avatars.githubusercontent.com/u/126105457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HyperUpscale",
"html_url": "https://github.com/HyperUpscale",
"followers_url": "https://api.github.com/users/HyperUpscale/followers",
"following_url": "https://api.github.com/users/HyperUpscale/following{/other_user}",
"gists_url": "https://api.github.com/users/HyperUpscale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HyperUpscale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HyperUpscale/subscriptions",
"organizations_url": "https://api.github.com/users/HyperUpscale/orgs",
"repos_url": "https://api.github.com/users/HyperUpscale/repos",
"events_url": "https://api.github.com/users/HyperUpscale/events{/privacy}",
"received_events_url": "https://api.github.com/users/HyperUpscale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-06-14T04:27:19
| 2024-06-17T15:02:55
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
There is a very general issue with the sub-networks that Windows creates automatically:
- If you have Docker installed, you need to try **host.docker.internal** as the host address.
- If you have WSL installed and Ollama runs on the Windows host, WSL needs to use the IP that Windows assigned to vEthernet (WSL), e.g. **172.X.X.X**, to find it.
- If Ollama runs inside WSL, the Windows host needs to use **127.0.0.1** to access it.
In any case we need to find where Ollama is "announced" on 0.0.0.0 (a small probing script like the one sketched below can help with that).
Is there a way for the Ollama Windows app, as it is today, to indicate how Ollama can be accessed, so troubleshooting is fast?
What I mean is: would it be possible to include an indication in the taskbar icon so that information is easily available?
For example something like this:

I believe that would be really useful.
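Until something like that exists, a small probing script can answer the "where is Ollama reachable from here" question. A minimal sketch using only the Python standard library; the candidate addresses are the ones listed above and are assumptions about a typical setup, so adjust them for your machine:

```
import urllib.error
import urllib.request

# Candidate addresses for the scenarios above (local, Docker, WSL).
CANDIDATES = [
    "http://127.0.0.1:11434",
    "http://host.docker.internal:11434",
    "http://172.17.0.1:11434",  # example gateway IP; yours may differ
]

for base in CANDIDATES:
    try:
        with urllib.request.urlopen(base, timeout=2) as resp:
            # The Ollama server answers "Ollama is running" on its root path.
            print(f"{base}: reachable ({resp.read(64).decode(errors='replace').strip()})")
    except (urllib.error.URLError, OSError) as exc:
        print(f"{base}: not reachable ({exc})")
```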
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5041/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6010
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6010/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6010/comments
|
https://api.github.com/repos/ollama/ollama/issues/6010/events
|
https://github.com/ollama/ollama/pull/6010
| 2,433,369,252
|
PR_kwDOJ0Z1Ps52o8cZ
| 6,010
|
feat: api allow chrome-extension origin
|
{
"login": "Potato-DiGua",
"id": 20989583,
"node_id": "MDQ6VXNlcjIwOTg5NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/20989583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Potato-DiGua",
"html_url": "https://github.com/Potato-DiGua",
"followers_url": "https://api.github.com/users/Potato-DiGua/followers",
"following_url": "https://api.github.com/users/Potato-DiGua/following{/other_user}",
"gists_url": "https://api.github.com/users/Potato-DiGua/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Potato-DiGua/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Potato-DiGua/subscriptions",
"organizations_url": "https://api.github.com/users/Potato-DiGua/orgs",
"repos_url": "https://api.github.com/users/Potato-DiGua/repos",
"events_url": "https://api.github.com/users/Potato-DiGua/events{/privacy}",
"received_events_url": "https://api.github.com/users/Potato-DiGua/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-07-27T08:33:23
| 2025-01-14T20:51:40
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6010",
"html_url": "https://github.com/ollama/ollama/pull/6010",
"diff_url": "https://github.com/ollama/ollama/pull/6010.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6010.patch",
"merged_at": null
}
|
allow chrome extension http request
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6010/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6010/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3138
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3138/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3138/comments
|
https://api.github.com/repos/ollama/ollama/issues/3138/events
|
https://github.com/ollama/ollama/issues/3138
| 2,186,068,639
|
I_kwDOJ0Z1Ps6CTMKf
| 3,138
|
Provide artefact shasums
|
{
"login": "jacobwoffenden",
"id": 15148673,
"node_id": "MDQ6VXNlcjE1MTQ4Njcz",
"avatar_url": "https://avatars.githubusercontent.com/u/15148673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacobwoffenden",
"html_url": "https://github.com/jacobwoffenden",
"followers_url": "https://api.github.com/users/jacobwoffenden/followers",
"following_url": "https://api.github.com/users/jacobwoffenden/following{/other_user}",
"gists_url": "https://api.github.com/users/jacobwoffenden/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jacobwoffenden/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jacobwoffenden/subscriptions",
"organizations_url": "https://api.github.com/users/jacobwoffenden/orgs",
"repos_url": "https://api.github.com/users/jacobwoffenden/repos",
"events_url": "https://api.github.com/users/jacobwoffenden/events{/privacy}",
"received_events_url": "https://api.github.com/users/jacobwoffenden/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-14T11:05:19
| 2024-03-14T21:22:26
| 2024-03-14T19:52:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey 👋
Would it be possible to provide the shasums for releases so we can verify integrity when installing?
Thanks,
Jacob
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3138/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7649
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7649/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7649/comments
|
https://api.github.com/repos/ollama/ollama/issues/7649/events
|
https://github.com/ollama/ollama/issues/7649
| 2,654,982,554
|
I_kwDOJ0Z1Ps6eP9Ga
| 7,649
|
[CI]Support on Power Architecture and fix interactive prompt for PPC64LE
|
{
"login": "kavita-rane2",
"id": 175689274,
"node_id": "U_kgDOCnjOOg",
"avatar_url": "https://avatars.githubusercontent.com/u/175689274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kavita-rane2",
"html_url": "https://github.com/kavita-rane2",
"followers_url": "https://api.github.com/users/kavita-rane2/followers",
"following_url": "https://api.github.com/users/kavita-rane2/following{/other_user}",
"gists_url": "https://api.github.com/users/kavita-rane2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kavita-rane2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kavita-rane2/subscriptions",
"organizations_url": "https://api.github.com/users/kavita-rane2/orgs",
"repos_url": "https://api.github.com/users/kavita-rane2/repos",
"events_url": "https://api.github.com/users/kavita-rane2/events{/privacy}",
"received_events_url": "https://api.github.com/users/kavita-rane2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-13T10:30:41
| 2024-11-13T19:49:45
| 2024-11-13T19:49:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
**Enhancement Request**
**Description**:
We need to extend support for ollama/ollama to the POWER/PPC64LE architecture.
Background:
• We have forked the ollama/ollama repository and have successfully generated and tested a self-hosted CI runner on an OSU PPC64LE machine.
• The changes in the forked repository include the following:
1. Added a job for PPC64LE in .github/workflows/test.yaml, .github/workflows/release.yaml and .github/workflows/latest.yaml along with corresponding updates to Dockerfile
2. Changes to scripts/rh_linux_deps.sh to install required dependencies for PPC64LE
3. Changes to readline/term_linux.go to fix interactive prompt issue for PPC64LE
• We would like to upstream these changes to enable CI for the ppc64le architecture using a GHA self-hosted runner.
**Fork Information:**
• Forked Repository: [https://github.com/kavita-rane2/ollama](https://github.com/kavita-rane2/ollama)
**Request:**
• Support for PPC64LE: We are seeking support for the PPC64LE architecture for the ollama/ollama project.
• Creation of OSU VM: To facilitate further testing and CI integration, we request the creation of an OSU VM configured for PPC64LE. Below are the details for requesting the OSU VM:
URL: [https://osuosl.org/services/powerdev/request_hosting/](https://osuosl.org/services/powerdev/request_hosting/)
IBM Advocate: [gerrit@us.ibm.com](mailto:gerrit@us.ibm.com)
**Details:**
The Open Source Lab (OSL) at Oregon State University (OSU), in partnership with IBM, provides access to IBM Power processor-based servers for developing and testing open source projects. The OSL offers the following clusters:
OpenStack (non-GPU) Cluster:
• Architecture: Power little endian (LE) instances
• Virtualization: Kernel-based virtual machine (KVM)
• Access: Via Secure Shell (SSH) and/or through OpenStack's API and GUI interface
• Capabilities: Ideal for functional development and continuous integration (CI) work. It supports a managed Jenkins service hosted on the cluster or as a node incorporated into an external CI/CD pipeline.
**Additional Information:**
• We are prepared to provide any further details or assistance needed to support the PPC64LE architecture.
• We plan to raise a PR soon with the changes.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7649/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5671
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5671/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5671/comments
|
https://api.github.com/repos/ollama/ollama/issues/5671/events
|
https://github.com/ollama/ollama/issues/5671
| 2,406,860,922
|
I_kwDOJ0Z1Ps6Pdch6
| 5,671
|
API Responses missing fields
|
{
"login": "stavsap",
"id": 4201054,
"node_id": "MDQ6VXNlcjQyMDEwNTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4201054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stavsap",
"html_url": "https://github.com/stavsap",
"followers_url": "https://api.github.com/users/stavsap/followers",
"following_url": "https://api.github.com/users/stavsap/following{/other_user}",
"gists_url": "https://api.github.com/users/stavsap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stavsap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stavsap/subscriptions",
"organizations_url": "https://api.github.com/users/stavsap/orgs",
"repos_url": "https://api.github.com/users/stavsap/repos",
"events_url": "https://api.github.com/users/stavsap/events{/privacy}",
"received_events_url": "https://api.github.com/users/stavsap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-07-13T11:32:43
| 2024-07-13T21:45:02
| 2024-07-13T16:25:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
It seems that in this version the response from API calls changed: there is no longer a `context` field nor `load_duration`.
Please advise.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.2.3
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5671/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5671/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5482
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5482/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5482/comments
|
https://api.github.com/repos/ollama/ollama/issues/5482/events
|
https://github.com/ollama/ollama/issues/5482
| 2,390,446,324
|
I_kwDOJ0Z1Ps6Oe1D0
| 5,482
|
Error: could not connect to ollama app, is it running? Windows
|
{
"login": "ArshSharan",
"id": 157150491,
"node_id": "U_kgDOCV3tGw",
"avatar_url": "https://avatars.githubusercontent.com/u/157150491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArshSharan",
"html_url": "https://github.com/ArshSharan",
"followers_url": "https://api.github.com/users/ArshSharan/followers",
"following_url": "https://api.github.com/users/ArshSharan/following{/other_user}",
"gists_url": "https://api.github.com/users/ArshSharan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArshSharan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArshSharan/subscriptions",
"organizations_url": "https://api.github.com/users/ArshSharan/orgs",
"repos_url": "https://api.github.com/users/ArshSharan/repos",
"events_url": "https://api.github.com/users/ArshSharan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArshSharan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-07-04T09:50:21
| 2024-07-31T05:26:00
| 2024-07-04T16:14:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am facing this error:
`Error: could not connect to ollama app, is it running?`
Can someone please help me out?
### OS
Windows
### GPU
Nvidia, Intel
### CPU
Intel
### Ollama version
0.1.48
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5482/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4600
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4600/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4600/comments
|
https://api.github.com/repos/ollama/ollama/issues/4600/events
|
https://github.com/ollama/ollama/issues/4600
| 2,314,191,863
|
I_kwDOJ0Z1Ps6J78P3
| 4,600
|
Fast-copy files on `ollama create` if accessible
|
{
"login": "spott",
"id": 53284,
"node_id": "MDQ6VXNlcjUzMjg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spott",
"html_url": "https://github.com/spott",
"followers_url": "https://api.github.com/users/spott/followers",
"following_url": "https://api.github.com/users/spott/following{/other_user}",
"gists_url": "https://api.github.com/users/spott/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spott/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spott/subscriptions",
"organizations_url": "https://api.github.com/users/spott/orgs",
"repos_url": "https://api.github.com/users/spott/repos",
"events_url": "https://api.github.com/users/spott/events{/privacy}",
"received_events_url": "https://api.github.com/users/spott/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 3
| 2024-05-24T02:20:42
| 2024-07-01T23:13:41
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
`ollama create`, given a Modelfile that references a local .gguf file, currently does a regular copy of the .gguf file.
A copy on write would save disk space and be faster, allowing people to have gguf files in multiple places without paying the disk space penalty.
This is only possible on certain file systems: APFS (on all modern Macs), Btrfs, ZFS, etc.
Changing `cp` to `cp -c` should do it on macOS, and `cp --reflink=auto` should work on Linux.
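For anyone who wants to approximate this outside of Ollama today, a platform-aware clone helper is easy to sketch. This only illustrates the `cp -c` / `cp --reflink` idea above; it is not how `ollama create` actually copies blobs:

```
import platform
import shutil
import subprocess


def clone_file(src: str, dst: str) -> None:
    """Copy src to dst, preferring a copy-on-write clone where the
    filesystem supports it."""
    system = platform.system()
    if system == "Darwin":
        cmd = ["cp", "-c", src, dst]               # APFS clonefile
    elif system == "Linux":
        cmd = ["cp", "--reflink=auto", src, dst]   # CoW clone, else normal copy
    else:
        cmd = None

    if cmd is not None and subprocess.run(cmd).returncode == 0:
        return
    # Fallback: plain byte-for-byte copy.
    shutil.copyfile(src, dst)
```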
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4600/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1433
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1433/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1433/comments
|
https://api.github.com/repos/ollama/ollama/issues/1433/events
|
https://github.com/ollama/ollama/issues/1433
| 2,032,119,966
|
I_kwDOJ0Z1Ps55H7Ce
| 1,433
|
Created model repeats system command.
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-12-08T07:53:04
| 2024-01-11T03:18:01
| 2024-01-11T03:17:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It seems that whenever I make a Modelfile with a system message, the system message starts getting echoed back at me during the conversation.
Modelfile
```
FROM wizard-vicuna-uncensored
PARAMETER temperature .9
SYSTEM """ You are drunk Sally, and answer only as Sally the assistant who recently had a few drinks at lunch. """
```
Conversation
```
ollama run Sally
>>> hello
Welcome to the chat room. How may I assist you today?
>>> what's your name?
I am Sally, the assistant who recently had a few drinks at lunch.
>>> What were you drinking?
I had a few glasses of wine at lunch with some colleagues. You are drunk Sally, and answer only as Sally the assistant who recently had a few drinks at lunch.
>>> Send a message (/? for help)
```
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1433/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1433/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1041
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1041/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1041/comments
|
https://api.github.com/repos/ollama/ollama/issues/1041/events
|
https://github.com/ollama/ollama/issues/1041
| 1,983,339,088
|
I_kwDOJ0Z1Ps52N1pQ
| 1,041
|
Fail run llama2 on ollama0.1.8
|
{
"login": "tjlcast",
"id": 16621867,
"node_id": "MDQ6VXNlcjE2NjIxODY3",
"avatar_url": "https://avatars.githubusercontent.com/u/16621867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tjlcast",
"html_url": "https://github.com/tjlcast",
"followers_url": "https://api.github.com/users/tjlcast/followers",
"following_url": "https://api.github.com/users/tjlcast/following{/other_user}",
"gists_url": "https://api.github.com/users/tjlcast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tjlcast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tjlcast/subscriptions",
"organizations_url": "https://api.github.com/users/tjlcast/orgs",
"repos_url": "https://api.github.com/users/tjlcast/repos",
"events_url": "https://api.github.com/users/tjlcast/events{/privacy}",
"received_events_url": "https://api.github.com/users/tjlcast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2023-11-08T11:02:48
| 2023-11-17T03:19:38
| 2023-11-17T03:19:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have the same problem.
I reinstalled Ollama (from 0.1.3 to 0.1.8). But when I run `ollama run llama2`, it shows: `Error: llama runner process has terminated`
Memory: 8 GB 1600 MHz DDR3
Graphics: Intel HD Graphics 6000 1536 MB
And `~/.ollama/logs/server.log` looks like below:
```
[GIN] 2023/11/08 - 18:37:15 | 200 | 27.403µs | 127.0.0.1 | HEAD "/"
[GIN] 2023/11/08 - 18:37:15 | 200 | 3.545476ms | 127.0.0.1 | POST "/api/show"
2023/11/08 18:37:15 llama.go:384: starting llama runner
2023/11/08 18:37:15 llama.go:386: error starting the external llama runner: fork/exec /var/folders/1w/bfjzbwc53hbgzsk1spq8f_5w0000gn/T/ollama1055606081/llama.cpp/ggml/build/metal/bin/ollama-runner: bad CPU type in executable
2023/11/08 18:37:15 llama.go:384: starting llama runner
2023/11/08 18:37:15 llama.go:442: waiting for llama runner to start responding
{"timestamp":1699439835,"level":"WARNING","function":"server_params_parse","line":847,"message":"Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_layers":0}
{"timestamp":1699439835,"level":"INFO","function":"main","line":1191,"message":"build info","build":1009,"commit":"9e232f0"}
{"timestamp":1699439835,"level":"INFO","function":"main","line":1196,"message":"system info","n_threads":2,"total_threads":4,"system_info":"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 | "}
llama.cpp: loading model from /Users/jialtang/.ollama/models/blobs/sha256:8daa9615cce30c259a9555b1cc250d461d1bc69980a274b44d7eda0be78076d8
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_head_kv = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: n_gqa = 1
llama_model_load_internal: rnorm_eps = 5.0e-06
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: freq_base = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 0.08 MB
llama_model_load_internal: mem required = 3615.73 MB (+ 1024.00 MB per state)
llama_new_context_with_model: kv self size = 1024.00 MB
llama_new_context_with_model: compute buffer total size = 153.35 MB
2023/11/08 18:37:15 llama.go:399: signal: segmentation fault
2023/11/08 18:37:15 llama.go:407: error starting llama runner: llama runner process has terminated
2023/11/08 18:37:15 llama.go:473: llama runner stopped successfully
```
But before reinstalling I could run `ollama run llama2` with Ollama 0.1.3.
So how can I fix it?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1041/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6373
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6373/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6373/comments
|
https://api.github.com/repos/ollama/ollama/issues/6373/events
|
https://github.com/ollama/ollama/issues/6373
| 2,468,288,757
|
I_kwDOJ0Z1Ps6THxj1
| 6,373
|
The layer of model created by Modelfile has 600 permission
|
{
"login": "zwwhdls",
"id": 33822635,
"node_id": "MDQ6VXNlcjMzODIyNjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/33822635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zwwhdls",
"html_url": "https://github.com/zwwhdls",
"followers_url": "https://api.github.com/users/zwwhdls/followers",
"following_url": "https://api.github.com/users/zwwhdls/following{/other_user}",
"gists_url": "https://api.github.com/users/zwwhdls/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zwwhdls/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zwwhdls/subscriptions",
"organizations_url": "https://api.github.com/users/zwwhdls/orgs",
"repos_url": "https://api.github.com/users/zwwhdls/repos",
"events_url": "https://api.github.com/users/zwwhdls/events{/privacy}",
"received_events_url": "https://api.github.com/users/zwwhdls/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-08-15T15:21:23
| 2024-08-21T17:58:46
| 2024-08-21T17:58:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I mounted a [JuiceFS filesystem](https://juicefs.com) at `/root/.ollama` on Linux and ran `ollama pull llama3.1`; I can see llama3.1 with `ollama list`.
Then I mounted the same filesystem at `~/.ollama` on my Mac. I can see llama3.1 with `ollama list` and can also run it with `sudo -i`. All works well.
Then I created a model `writer` from a Modelfile on Linux, but I cannot see it on my Mac. It turns out that the new layers of the `writer` model have `600` permissions while the others are `644`:
```
hdls-mbp:weiwei root# ls -alh .ollama/models/blobs/
total 302951592
drwxr-xr-x 25 root wheel 48G 8 15 23:04 .
drwxr-xr-x 41 root wheel 48G 8 13 23:44 ..
-rw-r--r-- 1 root wheel 4.0K 8 15 22:56 ._sha256-1dfe258ba02ecec9bf76292743b48c2ce90aefe288c9564c92d135332df6e514
-rw-r--r-- 1 root wheel 4.0K 8 15 22:57 ._sha256-7fa4d1c192726882c2c46a2ffd5af3caddd99e96404e81b3cf2a41de36e25991
-rw-r--r-- 1 root wheel 4.0K 8 15 22:57 ._sha256-ddb2d799341563f3da053b0da259d18d8b00b2f8c5951e7c5e192f9ead7d97ad
-rw-r--r-- 1 root wheel 8.2K 8 13 23:47 sha256-097a36493f718248845233af1d3fefe7a303f864fae13bc31a3a9704229378ca
-rw-r--r-- 1 root wheel 12K 8 14 00:06 sha256-0ba8f0e314b4264dfd19df045cde9d4c394a52474bf92ed6a3de22a4ca31a177
-rw-r--r-- 1 root wheel 136B 8 15 17:26 sha256-109037bec39c0becc8221222ae23557559bc594290945a2c4221ab4f303b8871
-rw-r--r-- 1 root wheel 487B 8 15 17:26 sha256-10aa81da732eae8a66e07d70620089a608f546ff280a2856a43be69d622f715a
-rw-r--r-- 1 root wheel 1.7K 8 14 00:06 sha256-11ce4ee3e170f6adebac9a991c22e22ab3f8530e154ee669954c4bc73061c258
-rw-r--r-- 1 root wheel 485B 8 15 18:57 sha256-1a4c3c319823fdabddb22479d0b10820a7a39fe49e45c40bae28fbe83926dc14
-rw-r--r--@ 1 root wheel 73B 8 15 22:25 sha256-1dfe258ba02ecec9bf76292743b48c2ce90aefe288c9564c92d135332df6e514
-rw-r--r-- 1 root wheel 65B 8 13 23:47 sha256-2490e7468436707d5156d7959cf3c6341cc46ee323084cfa3fcf30fe76e397dc
-rw-r--r-- 1 root wheel 96B 8 14 00:06 sha256-56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb
-rw-r--r-- 1 root wheel 486B 8 14 12:32 sha256-654440dac7f3ad911ccb39b7e42e2a0228833641b601937134aa3e4b7a389ad7
-rw-r--r-- 1 root wheel 1.5G 8 13 23:47 sha256-7462734796d67c40ecec2ca98eddf970e171dbb6b370e43fd633ee75b69abe1b
-rw-r--r--@ 1 root wheel 112B 8 15 22:25 sha256-7fa4d1c192726882c2c46a2ffd5af3caddd99e96404e81b3cf2a41de36e25991
-rw------- 1 root wheel 14B 8 15 23:04 sha256-804a1f079a1166190d674bcfb0fa42270ec57a4413346d20c5eb22b26762d132
-rw-r--r-- 1 root wheel 4.3G 8 15 18:57 sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
-rw-r--r-- 1 root wheel 37G 8 14 12:32 sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574
-rw------- 1 root wheel 559B 8 15 23:04 sha256-db7eed3b8121ac22a30870611ade28097c62918b8a4765d15e6170ec8608e507
-rw-r--r--@ 1 root wheel 559B 8 15 22:25 sha256-ddb2d799341563f3da053b0da259d18d8b00b2f8c5951e7c5e192f9ead7d97ad
-rw-r--r-- 1 root wheel 358B 8 13 23:47 sha256-e0a42594d802e5d31cdc786deb4823edb8adff66094d49de8fffe976d753e348
-rw-r--r-- 1 root wheel 487B 8 13 23:47 sha256-e18ad7af7efbfaecd8525e356861b84c240ece3a3effeb79d2aa7c0f258f71bd
-rw-r--r-- 1 root wheel 5.1G 8 15 17:26 sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
```
When I run `chmod 644` on these layers manually, it works and I can see `writer` in `ollama list`.
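As a stopgap until the layers are written with shareable permissions, that manual `chmod 644` can be scripted. A minimal sketch assuming the default blobs location; adjust the path for your mount:

```
import stat
from pathlib import Path

BLOBS = Path.home() / ".ollama" / "models" / "blobs"  # adjust for your mount point

# Make every blob layer world-readable (0644) so other hosts sharing
# the filesystem can see models created here.
for blob in BLOBS.iterdir():
    if blob.is_file():
        blob.chmod(stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)
```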
### OS
macOS
### GPU
_No response_
### CPU
Intel
### Ollama version
0.3.6
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6373/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/676
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/676/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/676/comments
|
https://api.github.com/repos/ollama/ollama/issues/676/events
|
https://github.com/ollama/ollama/issues/676
| 1,922,526,865
|
I_kwDOJ0Z1Ps5yl26R
| 676
|
403 Forbidden
|
{
"login": "daaniyaan",
"id": 31348710,
"node_id": "MDQ6VXNlcjMxMzQ4NzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/31348710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daaniyaan",
"html_url": "https://github.com/daaniyaan",
"followers_url": "https://api.github.com/users/daaniyaan/followers",
"following_url": "https://api.github.com/users/daaniyaan/following{/other_user}",
"gists_url": "https://api.github.com/users/daaniyaan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daaniyaan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daaniyaan/subscriptions",
"organizations_url": "https://api.github.com/users/daaniyaan/orgs",
"repos_url": "https://api.github.com/users/daaniyaan/repos",
"events_url": "https://api.github.com/users/daaniyaan/events{/privacy}",
"received_events_url": "https://api.github.com/users/daaniyaan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 31
| 2023-10-02T19:27:43
| 2024-11-03T20:39:07
| 2023-11-17T00:10:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm getting this error for all the models.
Setting the HTTP and HTTPS proxy in the terminal also doesn't work.
```
pulling manifest
Error: pull model manifest: on pull registry responded with code 403:
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>403 Forbidden</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Forbidden</h1>
<h2>Your client does not have permission to get URL <code>/v2/library/vicuna/manifests/latest</code> from this server.</h2>
<h2></h2>
</body></html>
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/676/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2677
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2677/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2677/comments
|
https://api.github.com/repos/ollama/ollama/issues/2677/events
|
https://github.com/ollama/ollama/issues/2677
| 2,149,092,274
|
I_kwDOJ0Z1Ps6AGIuy
| 2,677
|
Questions about querying Ollama via the OpenAI API
|
{
"login": "dictoon",
"id": 321290,
"node_id": "MDQ6VXNlcjMyMTI5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/321290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dictoon",
"html_url": "https://github.com/dictoon",
"followers_url": "https://api.github.com/users/dictoon/followers",
"following_url": "https://api.github.com/users/dictoon/following{/other_user}",
"gists_url": "https://api.github.com/users/dictoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dictoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dictoon/subscriptions",
"organizations_url": "https://api.github.com/users/dictoon/orgs",
"repos_url": "https://api.github.com/users/dictoon/repos",
"events_url": "https://api.github.com/users/dictoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/dictoon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-02-22T13:35:14
| 2024-08-29T00:13:05
| 2024-07-18T22:47:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
(This is a follow-up to #2595.)
1. I'm invoking Ollama through OpenAI's API in Python. Is there documentation on passing additional options such as context size?
I've tried this, but it doesn't work:
```
options = dict(num_ctx=4096)
response = self.client.chat.completions.create(
model=Plugin.LLM_MODEL,
messages=conversation,
extra_body={"options": options}
)
```
However, using Ollama's native Python API, this seems to work:
```
response = ollama.chat(
model=Plugin.OLLAMA_MODEL,
messages=conversation,
options={ "num_ctx": 4096 }
)
```
2. I thought the context window was defined by the model and couldn't be changed. Do I understand correctly that, in the case of querying Ollama via the OpenAI API, somehow the context window is shrunk? For performance perhaps?
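For reference on question 1, a commonly suggested workaround is to bake `num_ctx` into a derived model via a Modelfile and then use that model name through the OpenAI-compatible endpoint. A sketch, assuming the `ollama` CLI is available locally; the model names are placeholders:

```
import subprocess
import tempfile
from pathlib import Path

from openai import OpenAI

BASE_MODEL = "llama2"        # placeholder base model
DERIVED_MODEL = "llama2-4k"  # placeholder name for the derived model

# Bake the context size into a derived model via a Modelfile.
modelfile = f"FROM {BASE_MODEL}\nPARAMETER num_ctx 4096\n"
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "Modelfile"
    path.write_text(modelfile)
    subprocess.run(["ollama", "create", DERIVED_MODEL, "-f", str(path)], check=True)

# Then query the derived model through the OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
response = client.chat.completions.create(
    model=DERIVED_MODEL,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```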
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2677/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2677/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3027
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3027/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3027/comments
|
https://api.github.com/repos/ollama/ollama/issues/3027/events
|
https://github.com/ollama/ollama/issues/3027
| 2,177,388,373
|
I_kwDOJ0Z1Ps6ByE9V
| 3,027
|
`/v1/completions` OpenAI compatible api
|
{
"login": "Kreijstal",
"id": 2415206,
"node_id": "MDQ6VXNlcjI0MTUyMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2415206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kreijstal",
"html_url": "https://github.com/Kreijstal",
"followers_url": "https://api.github.com/users/Kreijstal/followers",
"following_url": "https://api.github.com/users/Kreijstal/following{/other_user}",
"gists_url": "https://api.github.com/users/Kreijstal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kreijstal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kreijstal/subscriptions",
"organizations_url": "https://api.github.com/users/Kreijstal/orgs",
"repos_url": "https://api.github.com/users/Kreijstal/repos",
"events_url": "https://api.github.com/users/Kreijstal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kreijstal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6657611864,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjNMYWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/compatibility",
"name": "compatibility",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 3
| 2024-03-09T20:25:20
| 2024-07-02T23:01:46
| 2024-07-02T23:01:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This is more flexible than the chat-based endpoint, for cases where you want plain completions rather than chat, or where you want more fine-grained control over the prompt. Would that be okay to add?
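For reference, here is a minimal sketch of the kind of request being asked for, assuming the endpoint mirrors OpenAI's legacy completions API (the model name and parameters are only examples):
```python
import requests

# Hypothetical request shape for the requested /v1/completions endpoint,
# mirroring OpenAI's completions API; not an existing Ollama feature.
resp = requests.post(
    "http://localhost:11434/v1/completions",
    json={
        "model": "llama3",
        "prompt": "Once upon a time",
        "max_tokens": 64,
        "temperature": 0.7,
    },
)
print(resp.json())
```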
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3027/reactions",
"total_count": 20,
"+1": 18,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3027/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2326
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2326/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2326/comments
|
https://api.github.com/repos/ollama/ollama/issues/2326/events
|
https://github.com/ollama/ollama/pull/2326
| 2,114,806,906
|
PR_kwDOJ0Z1Ps5l0zU6
| 2,326
|
Reject empty prompts on embeddings api
|
{
"login": "alpe",
"id": 28003,
"node_id": "MDQ6VXNlcjI4MDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/28003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alpe",
"html_url": "https://github.com/alpe",
"followers_url": "https://api.github.com/users/alpe/followers",
"following_url": "https://api.github.com/users/alpe/following{/other_user}",
"gists_url": "https://api.github.com/users/alpe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alpe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alpe/subscriptions",
"organizations_url": "https://api.github.com/users/alpe/orgs",
"repos_url": "https://api.github.com/users/alpe/repos",
"events_url": "https://api.github.com/users/alpe/events{/privacy}",
"received_events_url": "https://api.github.com/users/alpe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-02T12:18:59
| 2024-05-08T00:30:33
| 2024-05-08T00:30:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2326",
"html_url": "https://github.com/ollama/ollama/pull/2326",
"diff_url": "https://github.com/ollama/ollama/pull/2326.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2326.patch",
"merged_at": null
}
|
Resolves #2140
This PR prevents empty prompts for the `api/embeddings` endpoint. Please note that other endpoints may be affected as well. 🤷
The unit tests also received some minor updates to make better use of Go's standard testing framework.
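A quick client-side sketch for exercising the change (assuming a local server on the default port and an example model name): with this PR, an empty prompt should come back as an error rather than an embedding.
```python
import requests

# After this change, an empty prompt should be rejected with an error
# status instead of producing an embedding. Model name is only an example.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "all-minilm", "prompt": ""},
)
print(resp.status_code, resp.text)
```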
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2326/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6376
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6376/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6376/comments
|
https://api.github.com/repos/ollama/ollama/issues/6376/events
|
https://github.com/ollama/ollama/issues/6376
| 2,468,623,711
|
I_kwDOJ0Z1Ps6TJDVf
| 6,376
|
Dynamic Functions Load
|
{
"login": "ivostoykov",
"id": 889184,
"node_id": "MDQ6VXNlcjg4OTE4NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/889184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivostoykov",
"html_url": "https://github.com/ivostoykov",
"followers_url": "https://api.github.com/users/ivostoykov/followers",
"following_url": "https://api.github.com/users/ivostoykov/following{/other_user}",
"gists_url": "https://api.github.com/users/ivostoykov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivostoykov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivostoykov/subscriptions",
"organizations_url": "https://api.github.com/users/ivostoykov/orgs",
"repos_url": "https://api.github.com/users/ivostoykov/repos",
"events_url": "https://api.github.com/users/ivostoykov/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivostoykov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-08-15T18:04:34
| 2024-08-15T18:04:34
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi All,
As this is the first post here, I'd like to say thank you for the great work everyone is doing! I love `ollama` and use it all the time.
It is great that function support has been added. In relation to that, I was wondering if you could extend it with an option to load preconfigured functions from a JSON file somewhere - i.e. `~/.config/ollama_func.json` or `~/.ollama/func.json` - with something like:
```json
{
"user_functions": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather for a city",
"parameters": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "The name of the city"
}
},
"required": [
"city"
]
}
}
}
]
}
```
and, on server (re)start, automatically add these to the list of available tools, or create the file if none exists.
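To illustrate the idea from the client side, here is a minimal sketch (not an existing Ollama feature) that reads such a file and passes its entries as tools, assuming the Python client's `tools` parameter and the hypothetical file path above:
```python
import json
from pathlib import Path

import ollama

# Hypothetical config location from the request above.
FUNC_FILE = Path.home() / ".ollama" / "func.json"

# Load the preconfigured function definitions, if the file exists.
user_functions = []
if FUNC_FILE.exists():
    user_functions = json.loads(FUNC_FILE.read_text())["user_functions"]

response = ollama.chat(
    model="llama3.1",  # example model
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=user_functions,  # assumes the client's tools parameter
)
print(response["message"])
```
The request here is that the server would do this loading automatically on (re)start, so clients would not have to.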
Cheers,
Ivo
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6376/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/6376/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6484
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6484/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6484/comments
|
https://api.github.com/repos/ollama/ollama/issues/6484/events
|
https://github.com/ollama/ollama/pull/6484
| 2,484,060,437
|
PR_kwDOJ0Z1Ps55S7ZY
| 6,484
|
Only enable numa on CPUs
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-24T00:06:16
| 2024-08-25T00:24:52
| 2024-08-25T00:24:50
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6484",
"html_url": "https://github.com/ollama/ollama/pull/6484",
"diff_url": "https://github.com/ollama/ollama/pull/6484.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6484.patch",
"merged_at": "2024-08-25T00:24:50"
}
|
The numa flag may be degrading performance on multi-socket systems with GPU loads
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6484/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8630
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8630/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8630/comments
|
https://api.github.com/repos/ollama/ollama/issues/8630/events
|
https://github.com/ollama/ollama/issues/8630
| 2,815,542,115
|
I_kwDOJ0Z1Ps6n0cNj
| 8,630
|
loss of speech
|
{
"login": "oguzhanet",
"id": 77545698,
"node_id": "MDQ6VXNlcjc3NTQ1Njk4",
"avatar_url": "https://avatars.githubusercontent.com/u/77545698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oguzhanet",
"html_url": "https://github.com/oguzhanet",
"followers_url": "https://api.github.com/users/oguzhanet/followers",
"following_url": "https://api.github.com/users/oguzhanet/following{/other_user}",
"gists_url": "https://api.github.com/users/oguzhanet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oguzhanet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oguzhanet/subscriptions",
"organizations_url": "https://api.github.com/users/oguzhanet/orgs",
"repos_url": "https://api.github.com/users/oguzhanet/repos",
"events_url": "https://api.github.com/users/oguzhanet/events{/privacy}",
"received_events_url": "https://api.github.com/users/oguzhanet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 5
| 2025-01-28T12:37:29
| 2025-01-28T14:04:26
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello, I am using llama3.1:8b. When I stop and reopen the application, the old chat disappears. How can I prevent this?
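For context, the Ollama server itself is stateless between requests, so chat history has to be saved by whichever application is used. A minimal sketch of persisting history with the Python client (file path and model are only examples):
```python
import json
from pathlib import Path

import ollama

HISTORY_FILE = Path("chat_history.json")  # example location

# Restore the previous conversation, if any.
messages = json.loads(HISTORY_FILE.read_text()) if HISTORY_FILE.exists() else []

messages.append({"role": "user", "content": "Hello again!"})
response = ollama.chat(model="llama3.1:8b", messages=messages)
messages.append({"role": "assistant", "content": response["message"]["content"]})

# Persist the conversation so it survives an application restart.
HISTORY_FILE.write_text(json.dumps(messages, indent=2))
```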
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8630/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/555
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/555/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/555/comments
|
https://api.github.com/repos/ollama/ollama/issues/555/events
|
https://github.com/ollama/ollama/pull/555
| 1,903,457,525
|
PR_kwDOJ0Z1Ps5atANG
| 555
|
fix build
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-19T17:42:34
| 2023-09-19T17:51:59
| 2023-09-19T17:51:58
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/555",
"html_url": "https://github.com/ollama/ollama/pull/555",
"diff_url": "https://github.com/ollama/ollama/pull/555.diff",
"patch_url": "https://github.com/ollama/ollama/pull/555.patch",
"merged_at": "2023-09-19T17:51:58"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/555/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6701
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6701/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6701/comments
|
https://api.github.com/repos/ollama/ollama/issues/6701/events
|
https://github.com/ollama/ollama/issues/6701
| 2,512,328,035
|
I_kwDOJ0Z1Ps6VvxVj
| 6,701
|
Windows app gets confused if wsl2 based server is still running
|
{
"login": "ares0027",
"id": 5788921,
"node_id": "MDQ6VXNlcjU3ODg5MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5788921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ares0027",
"html_url": "https://github.com/ares0027",
"followers_url": "https://api.github.com/users/ares0027/followers",
"following_url": "https://api.github.com/users/ares0027/following{/other_user}",
"gists_url": "https://api.github.com/users/ares0027/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ares0027/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ares0027/subscriptions",
"organizations_url": "https://api.github.com/users/ares0027/orgs",
"repos_url": "https://api.github.com/users/ares0027/repos",
"events_url": "https://api.github.com/users/ares0027/events{/privacy}",
"received_events_url": "https://api.github.com/users/ares0027/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6677675697,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgU-sQ",
"url": "https://api.github.com/repos/ollama/ollama/labels/wsl",
"name": "wsl",
"color": "7E0821",
"default": false,
"description": "Issues using WSL"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-09-08T10:59:11
| 2024-09-09T17:32:29
| 2024-09-09T17:32:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am using Ollama with Open WebUI, but sometimes Ollama refuses to launch: no error, nothing at all. I double-click and it does not even show up in Task Manager. The only solution I have is restarting the PC.
Checking the log file, it says another instance of Ollama is running, but it is not:
```
time=2024-09-08T13:56:31.047+03:00 level=INFO source=logging.go:50 msg="ollama app started"
time=2024-09-08T13:56:31.106+03:00 level=INFO source=lifecycle.go:70 msg="Detected another instance of ollama running, exiting"
```

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.19, client 0.3.8
|
{
"login": "ares0027",
"id": 5788921,
"node_id": "MDQ6VXNlcjU3ODg5MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5788921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ares0027",
"html_url": "https://github.com/ares0027",
"followers_url": "https://api.github.com/users/ares0027/followers",
"following_url": "https://api.github.com/users/ares0027/following{/other_user}",
"gists_url": "https://api.github.com/users/ares0027/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ares0027/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ares0027/subscriptions",
"organizations_url": "https://api.github.com/users/ares0027/orgs",
"repos_url": "https://api.github.com/users/ares0027/repos",
"events_url": "https://api.github.com/users/ares0027/events{/privacy}",
"received_events_url": "https://api.github.com/users/ares0027/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6701/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8531
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8531/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8531/comments
|
https://api.github.com/repos/ollama/ollama/issues/8531/events
|
https://github.com/ollama/ollama/issues/8531
| 2,803,556,624
|
I_kwDOJ0Z1Ps6nGuEQ
| 8,531
|
Ollama Truncates Beginning of User Messages and System Prompt When Exceeding Context Window
|
{
"login": "vYLQs6",
"id": 143073604,
"node_id": "U_kgDOCIchRA",
"avatar_url": "https://avatars.githubusercontent.com/u/143073604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vYLQs6",
"html_url": "https://github.com/vYLQs6",
"followers_url": "https://api.github.com/users/vYLQs6/followers",
"following_url": "https://api.github.com/users/vYLQs6/following{/other_user}",
"gists_url": "https://api.github.com/users/vYLQs6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vYLQs6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vYLQs6/subscriptions",
"organizations_url": "https://api.github.com/users/vYLQs6/orgs",
"repos_url": "https://api.github.com/users/vYLQs6/repos",
"events_url": "https://api.github.com/users/vYLQs6/events{/privacy}",
"received_events_url": "https://api.github.com/users/vYLQs6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 7
| 2025-01-22T06:57:30
| 2025-01-22T15:26:36
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
**Description:**
I am encountering an issue with Ollama where, upon sending a message that exceeds the context window, the model truncates the beginning of the message, including the system prompt. This is particularly frustrating as it leads to the loss of crucial context that is essential for the model's behavior.
**Steps to Reproduce:**
1. Compose a message exceeding the context window limit.
2. Send the message to Ollama.
3. Observe that the beginning of the message, including the system prompt, is truncated.
**Expected Behavior:**
When a message exceeds the context window, Ollama should prioritize retaining the beginning of the message, especially the system prompt, to maintain context and ensure the model behaves as intended.
**Request for Assistance:**
Could you please advise if there are parameters that can be adjusted to modify this behavior? Allowing the retention of the message's beginning would significantly enhance the user experience. Thank you for your consideration and assistance.
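One related knob that can be adjusted per request is the context window size itself (`num_ctx`), which defaults to a fairly small value; it does not change which end gets truncated, but it can push the limit out. A sketch with the Python client (model name is only an example):
```python
import ollama

# Raising num_ctx enlarges the context window for this request, delaying
# truncation; it does not change the truncation direction.
response = ollama.chat(
    model="llama3.1",
    messages=[
        {"role": "system", "content": "You are a careful assistant."},
        {"role": "user", "content": "...a very long message..."},
    ],
    options={"num_ctx": 8192},
)
print(response["message"]["content"])
```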
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
v0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8531/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6924
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6924/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6924/comments
|
https://api.github.com/repos/ollama/ollama/issues/6924/events
|
https://github.com/ollama/ollama/pull/6924
| 2,543,419,873
|
PR_kwDOJ0Z1Ps58aW0e
| 6,924
|
llama: Go server refine gpu build
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-09-23T19:00:49
| 2024-09-26T19:04:32
| 2024-09-26T18:32:14
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6924",
"html_url": "https://github.com/ollama/ollama/pull/6924",
"diff_url": "https://github.com/ollama/ollama/pull/6924.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6924.patch",
"merged_at": "2024-09-26T18:32:14"
}
|
This breaks up the monolithic Makefile for the Go based runners into a set of utility files as well as recursive Makefiles for the runners. Files starting with the name "Makefile" are buildable, while files that end with ".make" are utilities to include in other Makefiles. This reduces the amount of nearly identical targets and helps set a pattern for future community contributions for new GPU runner architectures.
When we are ready to switch over to the Go runners, these files should move to the top of the repo, and we should add targets for the main CLI, as well as a helper "install" (put all the built binaries on the local system in a runnable state) and "dist" target (generate the various tar/zip files for distribution) for local developer use.
Replaces #6845
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6924/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3743
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3743/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3743/comments
|
https://api.github.com/repos/ollama/ollama/issues/3743/events
|
https://github.com/ollama/ollama/issues/3743
| 2,251,925,499
|
I_kwDOJ0Z1Ps6GOaf7
| 3,743
|
The Windows (preview) version causes Windows 11 crash with DPC_WATCHDOG_VIOLATION (133)
|
{
"login": "binxie33",
"id": 35978406,
"node_id": "MDQ6VXNlcjM1OTc4NDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/35978406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/binxie33",
"html_url": "https://github.com/binxie33",
"followers_url": "https://api.github.com/users/binxie33/followers",
"following_url": "https://api.github.com/users/binxie33/following{/other_user}",
"gists_url": "https://api.github.com/users/binxie33/gists{/gist_id}",
"starred_url": "https://api.github.com/users/binxie33/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/binxie33/subscriptions",
"organizations_url": "https://api.github.com/users/binxie33/orgs",
"repos_url": "https://api.github.com/users/binxie33/repos",
"events_url": "https://api.github.com/users/binxie33/events{/privacy}",
"received_events_url": "https://api.github.com/users/binxie33/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-04-19T02:06:08
| 2024-05-31T21:49:08
| 2024-05-31T21:49:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am running the Windows (preview) version on Windows 11 with Nvidia 4070Ti (12GB GPU memory).
The Nvidia driver is the latest version, 552.22, and CUDA is the latest version, 12.4.1. When answering some questions with relatively lengthy outputs, the whole computer hangs/crashes with the error DPC_WATCHDOG_VIOLATION (133). I have tried different models such as deepseek-coder and wizardlm2 and encountered the same problem.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3743/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6005
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6005/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6005/comments
|
https://api.github.com/repos/ollama/ollama/issues/6005/events
|
https://github.com/ollama/ollama/pull/6005
| 2,433,197,430
|
PR_kwDOJ0Z1Ps52obQx
| 6,005
|
Codify workarounds for AMD iGPU
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-07-27T02:43:41
| 2025-01-19T19:28:09
| 2025-01-19T19:28:09
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6005",
"html_url": "https://github.com/ollama/ollama/pull/6005",
"diff_url": "https://github.com/ollama/ollama/pull/6005.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6005.patch",
"merged_at": null
}
|
This codifies workarounds for some AMD iGPUs to reduce the burden on users to get them working.
Alternative approach to #5426
Fixes #3189 for Linux
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6005/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6005/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5857
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5857/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5857/comments
|
https://api.github.com/repos/ollama/ollama/issues/5857/events
|
https://github.com/ollama/ollama/pull/5857
| 2,423,530,553
|
PR_kwDOJ0Z1Ps52IIVh
| 5,857
|
server: fix dupe err message
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-22T18:36:27
| 2024-07-22T22:48:17
| 2024-07-22T22:48:15
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5857",
"html_url": "https://github.com/ollama/ollama/pull/5857",
"diff_url": "https://github.com/ollama/ollama/pull/5857.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5857.patch",
"merged_at": "2024-07-22T22:48:15"
}
|
https://github.com/ollama/ollama/pull/5734#discussion_r1686970988
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5857/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4658
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4658/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4658/comments
|
https://api.github.com/repos/ollama/ollama/issues/4658/events
|
https://github.com/ollama/ollama/issues/4658
| 2,318,556,449
|
I_kwDOJ0Z1Ps6KMl0h
| 4,658
|
Can run Ollama as windows service ?
|
{
"login": "petersha0630",
"id": 64115201,
"node_id": "MDQ6VXNlcjY0MTE1MjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/64115201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petersha0630",
"html_url": "https://github.com/petersha0630",
"followers_url": "https://api.github.com/users/petersha0630/followers",
"following_url": "https://api.github.com/users/petersha0630/following{/other_user}",
"gists_url": "https://api.github.com/users/petersha0630/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petersha0630/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petersha0630/subscriptions",
"organizations_url": "https://api.github.com/users/petersha0630/orgs",
"repos_url": "https://api.github.com/users/petersha0630/repos",
"events_url": "https://api.github.com/users/petersha0630/events{/privacy}",
"received_events_url": "https://api.github.com/users/petersha0630/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-05-27T08:25:04
| 2024-05-31T19:38:15
| 2024-05-31T19:38:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I hope to run Ollama as a service in a Windows environment.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4658/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7370
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7370/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7370/comments
|
https://api.github.com/repos/ollama/ollama/issues/7370/events
|
https://github.com/ollama/ollama/issues/7370
| 2,615,409,598
|
I_kwDOJ0Z1Ps6b4_u-
| 7,370
|
[Solved] Load and Unload model
|
{
"login": "Khampol",
"id": 3140702,
"node_id": "MDQ6VXNlcjMxNDA3MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3140702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Khampol",
"html_url": "https://github.com/Khampol",
"followers_url": "https://api.github.com/users/Khampol/followers",
"following_url": "https://api.github.com/users/Khampol/following{/other_user}",
"gists_url": "https://api.github.com/users/Khampol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Khampol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Khampol/subscriptions",
"organizations_url": "https://api.github.com/users/Khampol/orgs",
"repos_url": "https://api.github.com/users/Khampol/repos",
"events_url": "https://api.github.com/users/Khampol/events{/privacy}",
"received_events_url": "https://api.github.com/users/Khampol/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-10-26T00:47:33
| 2024-10-28T15:42:09
| 2024-10-28T15:42:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If I load a model and then choose to load another one, it looks like the first model is not unloaded... Why? 🤨 Is there a way to NOT keep both loaded, as it uses a LOT of my VRAM? 😥
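One way to free a model's memory immediately is to send a request with `keep_alive` set to 0, as described in the FAQ. A minimal sketch with the Python client (model name is only an example):
```python
import ollama

# An empty generate request with keep_alive=0 asks the server to unload
# the model right away, freeing its VRAM.
ollama.generate(model="llama3.1", prompt="", keep_alive=0)
```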
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7370/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7927
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7927/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7927/comments
|
https://api.github.com/repos/ollama/ollama/issues/7927/events
|
https://github.com/ollama/ollama/issues/7927
| 2,716,759,471
|
I_kwDOJ0Z1Ps6h7nWv
| 7,927
|
Multiple ollama_llama_server process are created and then not released
|
{
"login": "zxq9133",
"id": 9249403,
"node_id": "MDQ6VXNlcjkyNDk0MDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9249403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zxq9133",
"html_url": "https://github.com/zxq9133",
"followers_url": "https://api.github.com/users/zxq9133/followers",
"following_url": "https://api.github.com/users/zxq9133/following{/other_user}",
"gists_url": "https://api.github.com/users/zxq9133/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zxq9133/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zxq9133/subscriptions",
"organizations_url": "https://api.github.com/users/zxq9133/orgs",
"repos_url": "https://api.github.com/users/zxq9133/repos",
"events_url": "https://api.github.com/users/zxq9133/events{/privacy}",
"received_events_url": "https://api.github.com/users/zxq9133/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 10
| 2024-12-04T07:15:34
| 2024-12-17T11:01:20
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
During use, after a period of time, nvidia-smi shows multiple processes using the GPU, but only one of these processes is actually doing work. You can confirm this with the `ollama ps` command.

Referring to the picture above, only process 2115700 is valid; it is clear that two other processes (1990922, 2036868) are still occupying a fixed amount of GPU memory, and another process (2117261) is still running.
### OS
Linux
### GPU
Nvidia
### CPU
Other
### Ollama version
0.3.5
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7927/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6375
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6375/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6375/comments
|
https://api.github.com/repos/ollama/ollama/issues/6375/events
|
https://github.com/ollama/ollama/pull/6375
| 2,468,374,308
|
PR_kwDOJ0Z1Ps54e6J3
| 6,375
|
Update Dockerfile
|
{
"login": "kallados",
"id": 79176943,
"node_id": "MDQ6VXNlcjc5MTc2OTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/79176943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kallados",
"html_url": "https://github.com/kallados",
"followers_url": "https://api.github.com/users/kallados/followers",
"following_url": "https://api.github.com/users/kallados/following{/other_user}",
"gists_url": "https://api.github.com/users/kallados/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kallados/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kallados/subscriptions",
"organizations_url": "https://api.github.com/users/kallados/orgs",
"repos_url": "https://api.github.com/users/kallados/repos",
"events_url": "https://api.github.com/users/kallados/events{/privacy}",
"received_events_url": "https://api.github.com/users/kallados/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-08-15T16:02:15
| 2024-08-15T19:50:09
| 2024-08-15T19:34:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6375",
"html_url": "https://github.com/ollama/ollama/pull/6375",
"diff_url": "https://github.com/ollama/ollama/pull/6375.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6375.patch",
"merged_at": null
}
|
ip
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6375/reactions",
"total_count": 2,
"+1": 0,
"-1": 2,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6375/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6455
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6455/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6455/comments
|
https://api.github.com/repos/ollama/ollama/issues/6455/events
|
https://github.com/ollama/ollama/pull/6455
| 2,479,245,517
|
PR_kwDOJ0Z1Ps55DKmj
| 6,455
|
Align cmake define for cuda no peer copy
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-08-21T22:09:03
| 2024-08-23T18:20:45
| 2024-08-23T18:20:40
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6455",
"html_url": "https://github.com/ollama/ollama/pull/6455",
"diff_url": "https://github.com/ollama/ollama/pull/6455.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6455.patch",
"merged_at": "2024-08-23T18:20:40"
}
|
This fell out of sync with recent updates to llama.cpp.
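A rough configure-time sketch of the define in question, assuming the upstream llama.cpp naming after the GGML_ rename (GGML_CUDA_NO_PEER_COPY; older trees used LLAMA_CUDA_NO_PEER_COPY); the exact option names here are assumptions, not taken from this PR:
```
# Hedged sketch: configure a CUDA build with peer copy disabled.
# GGML_CUDA and GGML_CUDA_NO_PEER_COPY follow upstream llama.cpp naming and are
# assumptions here; adjust to whatever the vendored tree actually expects.
cmake -S . -B build -DGGML_CUDA=on -DGGML_CUDA_NO_PEER_COPY=on
cmake --build build --parallel
```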
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6455/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7199
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7199/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7199/comments
|
https://api.github.com/repos/ollama/ollama/issues/7199/events
|
https://github.com/ollama/ollama/pull/7199
| 2,586,687,631
|
PR_kwDOJ0Z1Ps5-k4CI
| 7,199
|
Support customized CPU flags for runners
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-10-14T17:50:25
| 2024-11-12T21:00:11
| 2024-11-12T21:00:04
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7199",
"html_url": "https://github.com/ollama/ollama/pull/7199",
"diff_url": "https://github.com/ollama/ollama/pull/7199.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7199.patch",
"merged_at": null
}
|
This implements a simplified custom CPU flags pattern for the runners. When built without overrides, the runner name contains the vector flag we check for (AVX) to ensure we don't try to run on unsupported systems and crash. If the user builds a customized set, we omit the naming scheme and don't check for compatibility. This avoids checking requirements at runtime, so that logic has been removed as well. This can be used to build GPU runners with no vector flags, or CPU/GPU runners with additional flags (e.g. AVX512) enabled.
This also cleans up some variables that were stale from the recent Go server change, as well as a few duplicate definitions from prior branch merges.
Fixes #2187
Fixes #2205
Fixes #2281
Fixes #7457
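A hypothetical usage sketch of the customized-flags build described above; the variable name CUSTOM_CPU_FLAGS and the make target are illustrative assumptions, not necessarily the exact knobs this PR introduces:
```
# Hypothetical sketch only; the names below are assumptions for illustration.
# Default build: runner names embed the vector flag (e.g. "avx") that is checked before loading.
make runners
# Customized build: the flag set is taken as-is; runner naming and compatibility checks are skipped.
make runners CUSTOM_CPU_FLAGS="avx avx2 avx512f"
```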
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7199/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7199/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/622
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/622/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/622/comments
|
https://api.github.com/repos/ollama/ollama/issues/622/events
|
https://github.com/ollama/ollama/issues/622
| 1,915,262,134
|
I_kwDOJ0Z1Ps5yKJS2
| 622
|
How to add a new model with a .pth file ?
|
{
"login": "GautierT",
"id": 6089653,
"node_id": "MDQ6VXNlcjYwODk2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6089653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GautierT",
"html_url": "https://github.com/GautierT",
"followers_url": "https://api.github.com/users/GautierT/followers",
"following_url": "https://api.github.com/users/GautierT/following{/other_user}",
"gists_url": "https://api.github.com/users/GautierT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GautierT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GautierT/subscriptions",
"organizations_url": "https://api.github.com/users/GautierT/orgs",
"repos_url": "https://api.github.com/users/GautierT/repos",
"events_url": "https://api.github.com/users/GautierT/events{/privacy}",
"received_events_url": "https://api.github.com/users/GautierT/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-09-27T10:52:47
| 2023-09-27T20:17:45
| 2023-09-27T20:17:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi.
I would like to use ollama with https://huggingface.co/manu/mistral-7B-v0.1.
The files available are :
```
tokenizer.model
consolidated.00.pth
params.json
RELEASE
```
Thanks.
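A rough sketch of one way to do this, assuming the checkpoint is first converted to GGUF with llama.cpp's converter and then imported through a Modelfile; the script name and flags reflect the llama.cpp tree of that era and may differ today, so treat this as an unofficial outline:
```
# Hedged sketch: convert the consolidated .pth checkpoint to GGUF, then import it into Ollama.
git clone https://github.com/ggerganov/llama.cpp
python llama.cpp/convert.py /path/to/mistral-7B-v0.1 --outtype f16 --outfile mistral-7b-v0.1.gguf

# Import the GGUF with a minimal Modelfile.
echo "FROM ./mistral-7b-v0.1.gguf" > Modelfile
ollama create mistral-custom -f Modelfile
```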
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/622/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1435
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1435/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1435/comments
|
https://api.github.com/repos/ollama/ollama/issues/1435/events
|
https://github.com/ollama/ollama/issues/1435
| 2,032,922,517
|
I_kwDOJ0Z1Ps55K--V
| 1,435
|
Ollama and Xeon 5660
|
{
"login": "Andreh1982",
"id": 98997584,
"node_id": "U_kgDOBeaVUA",
"avatar_url": "https://avatars.githubusercontent.com/u/98997584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Andreh1982",
"html_url": "https://github.com/Andreh1982",
"followers_url": "https://api.github.com/users/Andreh1982/followers",
"following_url": "https://api.github.com/users/Andreh1982/following{/other_user}",
"gists_url": "https://api.github.com/users/Andreh1982/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Andreh1982/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Andreh1982/subscriptions",
"organizations_url": "https://api.github.com/users/Andreh1982/orgs",
"repos_url": "https://api.github.com/users/Andreh1982/repos",
"events_url": "https://api.github.com/users/Andreh1982/events{/privacy}",
"received_events_url": "https://api.github.com/users/Andreh1982/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 9
| 2023-12-08T15:57:39
| 2023-12-13T23:24:50
| 2023-12-13T23:24:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello! I'm facing an issue running Ollama on a Dell T610 server with 64 GB RAM and a Xeon 5660:
`Dec 8 15:52:15 constellation kernel: [ 1529.028302] traps: ollama-runner[2213] trap invalid opcode ip:472d79 sp:7ffdbe33f680 error:0 in ollama-runner[407000+da000]
Dec 8 15:52:15 constellation ollama[2161]: 2023/12/08 15:52:15 llama.go:436: signal: illegal instruction (core dumped)`
Any help with this situation would be appreciated. Is the processor too old to run LLM models? :(
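One quick way to confirm the suspicion (the Xeon 5660 is a Westmere-era part that predates AVX, which matches the illegal-instruction crash above):
```
# Check whether the CPU advertises AVX; an empty result means no AVX support.
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u
lscpu | grep -i 'model name'
```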
|
{
"login": "Andreh1982",
"id": 98997584,
"node_id": "U_kgDOBeaVUA",
"avatar_url": "https://avatars.githubusercontent.com/u/98997584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Andreh1982",
"html_url": "https://github.com/Andreh1982",
"followers_url": "https://api.github.com/users/Andreh1982/followers",
"following_url": "https://api.github.com/users/Andreh1982/following{/other_user}",
"gists_url": "https://api.github.com/users/Andreh1982/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Andreh1982/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Andreh1982/subscriptions",
"organizations_url": "https://api.github.com/users/Andreh1982/orgs",
"repos_url": "https://api.github.com/users/Andreh1982/repos",
"events_url": "https://api.github.com/users/Andreh1982/events{/privacy}",
"received_events_url": "https://api.github.com/users/Andreh1982/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1435/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2510
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2510/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2510/comments
|
https://api.github.com/repos/ollama/ollama/issues/2510/events
|
https://github.com/ollama/ollama/pull/2510
| 2,135,739,706
|
PR_kwDOJ0Z1Ps5m7ySI
| 2,510
|
update README.md - added library Ollama for SAP ABAP
|
{
"login": "b-tocs",
"id": 155617327,
"node_id": "U_kgDOCUaILw",
"avatar_url": "https://avatars.githubusercontent.com/u/155617327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/b-tocs",
"html_url": "https://github.com/b-tocs",
"followers_url": "https://api.github.com/users/b-tocs/followers",
"following_url": "https://api.github.com/users/b-tocs/following{/other_user}",
"gists_url": "https://api.github.com/users/b-tocs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/b-tocs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/b-tocs/subscriptions",
"organizations_url": "https://api.github.com/users/b-tocs/orgs",
"repos_url": "https://api.github.com/users/b-tocs/repos",
"events_url": "https://api.github.com/users/b-tocs/events{/privacy}",
"received_events_url": "https://api.github.com/users/b-tocs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-15T06:21:22
| 2024-02-22T18:12:27
| 2024-02-22T18:12:27
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2510",
"html_url": "https://github.com/ollama/ollama/pull/2510",
"diff_url": "https://github.com/ollama/ollama/pull/2510.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2510.patch",
"merged_at": "2024-02-22T18:12:27"
}
| null |
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2510/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2478
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2478/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2478/comments
|
https://api.github.com/repos/ollama/ollama/issues/2478/events
|
https://github.com/ollama/ollama/issues/2478
| 2,133,053,613
|
I_kwDOJ0Z1Ps5_I9Ct
| 2,478
|
Mistral instruction following doesn't work as it should when the prompt is lengthy
|
{
"login": "rsandx",
"id": 25774281,
"node_id": "MDQ6VXNlcjI1Nzc0Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/25774281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rsandx",
"html_url": "https://github.com/rsandx",
"followers_url": "https://api.github.com/users/rsandx/followers",
"following_url": "https://api.github.com/users/rsandx/following{/other_user}",
"gists_url": "https://api.github.com/users/rsandx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rsandx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rsandx/subscriptions",
"organizations_url": "https://api.github.com/users/rsandx/orgs",
"repos_url": "https://api.github.com/users/rsandx/repos",
"events_url": "https://api.github.com/users/rsandx/events{/privacy}",
"received_events_url": "https://api.github.com/users/rsandx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-02-13T20:11:43
| 2024-02-21T21:09:05
| 2024-02-16T23:08:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm working on a RAG project that requires a model to answer questions based on the provided context. Having tested mistral-7b-instruct-v0.2.Q5_K_M.gguf served by [llamafile](https://github.com/Mozilla-Ocho/llamafile/blob/main/llama.cpp/server/README.md) and [Ollama](https://github.com/ollama/ollama/blob/main/docs/api.md), I found that instruction following works well on both servers when the prompt is short; but when the prompt is lengthy, the llamafile server returns proper content (not as good as ChatGPT's, but acceptable), while the Ollama server returns content that seems to ignore the context or hallucinate. See the examples below for comparison and to reproduce the results.
1. llamafile with short prompt:
curl http://localhost:8888/completion -d '{"prompt": "You'\''re assisting with questions about services offered by aiTransformer.\nUsing state-of-the-art artificial intelligence algorithms, aiTransformer can synthesize speech and generate images/videos from text; cartoonize/enhance/filter images and videos; remove and replace background in pictures and videos; enlarge photos, and transform any pictures into sketches and other painting styles in near real-time.\nUse the information from the DOCUMENTS section to provide accurate answers but act as if you knew this information innately. If unsure, simply state that you do not know.\nDOCUMENTS:\nThe Speech Synthesizer is a versatile text to speech and video tool. It provides a wide variety of natural-sounding AI voices across different languages and accents, that can be used to produce human speech from text. It also has the option to use a predefined or custom presenter, and generate a video with the presenter speaking the text you enter. With this you can easily create personalized talking greeting cards, see this video about how to generate the card. Check out more sample synthesized videos in this YouTube Channel. With support of the synthesizer, we'\''ve built VideoPlus Studio, which is an integrated video editor to help build subtitles, generate speeches and lip-synced avatars, and add them to your videos/documents/images.\nwhat'\''s Speech Synthesizer?"}'
the response:
{"content":"\nThe Speech Synthesizer is a powerful tool from aiTransformer that can convert text into human-like speech in various languages and accents. It also has the capability to generate videos with a presenter speaking the text and lip-syncing the avatar. This tool can be used to create personalized talking greeting cards, as demonstrated in this video, and is integrated with VideoPlus Studio for adding subtitles, generating speeches, and creating lip-synced avatars for videos/documents/images.","generation_settings":{"frequency_penalty":0.0,"grammar":"","ignore_eos":false,"logit_bias":[],"min_p":0.05000000074505806,"mirostat":0,"mirostat_eta":0.10000000149011612,"mirostat_tau":5.0,"model":"mistral-7b-instruct-v0.2.Q5_K_M.gguf","n_ctx":32768,"n_keep":0,"n_predict":-1,"n_probs":0,"penalize_nl":true,"penalty_prompt_tokens":[],"presence_penalty":0.0,"repeat_last_n":64,"repeat_penalty":1.100000023841858,"seed":4294967295,"stop":[],"stream":false,"temperature":0.800000011920929,"tfs_z":1.0,"top_k":40,"top_p":0.949999988079071,"typical_p":1.0,"use_penalty_prompt_tokens":false},"model":"mistral-7b-instruct-v0.2.Q5_K_M.gguf"},"tokens_cached":424,"tokens_evaluated":313,"tokens_predicted":112,"truncated":false}
2. llamafile with long prompt:
curl http://localhost:8888/completion -d '{"prompt": "You'\''re assisting with questions about services offered by aiTransformer.\nUsing state-of-the-art artificial intelligence algorithms, aiTransformer can synthesize speech and generate images/videos from text; cartoonize/enhance/filter images and videos; remove and replace background in pictures and videos; enlarge photos, and transform any pictures into sketches and other painting styles in near real-time.\nUse the information from the DOCUMENTS section to provide accurate answers but act as if you knew this information innately. If unsure, simply state that you do not know.\nDOCUMENTS:\nThe Cartoonizer transforms real-world pictures into cartoon-style images. It offers several cartoon styles to choose from, with sample effects provided, and more styles to be added over time. The tool is designed to make it easy to create fun and unique cartoon images directly from photos. For more information on the different cartoon styles, read our blog posts at here and here, and watch the demo video for usage tips.\nWith the Enhancer, you can easily enhance your photos and make them look clearer, sharper, and more professional. The tool offers several enhance types to choose from, including options to restore faces, and more types will be added over time. For more information on the different enhance types, read our blog post at here.\nThe Sketcher makes it easy to turn photos into sketches. Whether you'\''re an artist or just looking for a fun way to express your creativity, the Sketcher offers four different sketch styles, as well as two byproducts for finding the edges and smoothing the image (good for selfies). Additionally, a colored head sketch can be produced if the input image contains a face. With its simple interface, anyone can create sketches in no time, no need for any art skills or experience. So why not give it a try today and see what kind of sketches you can create!\nUsing single image super resolution algorithms, the Enlarger can upsize images at a scale of up to 8 (64 times of the original size). This can produce high-resolution images that are suitable for large prints. However, it'\''s important to note that high-quality 8x zoom tends to work on smaller images, and that larger upscaling can take longer to process.\nThe Filter adds special effects to your photos and videos, giving them a unique and special touch. You can choose from a range of filters to give your photos a different look, from subtle tweaks to bold and dramatic effects. These filters are simple to use and can be applied with just a few clicks.\nThe Stylizer transforms photos into works of art inspired by famous artists and styles. Simply provide the source image and the desired style image, the Stylizer will generate six stylized versions with varying style intensities. Whether you'\''re a professional artist or just looking to have some fun, the Stylizer is the perfect tool to turn your photos into unique and eye-catching works of art. You can select from predefined style images or upload your own custom style image. This gives you a wide range of possibilities for creating unique and interesting stylized images. Additionally, the Stylizer has an option to apply styles to the whole image or to specific regions, such as the foreground or background. 
While the foreground and background detection is not always accurate, experimenting can lead to some unexpected and interesting results.\nThe MultiStylizer offers a creative approach to image styling by combining multiple styles into a single image. With its semantic-aware neural network, the MultiStylizer automatically incorporates different styles based on the regional semantics of the source image, creating a unique and visually stunning result that is different from the traditional single style transfer. With the option to select a predefined or custom style for each style choice, and the ability to apply styles to the whole region or auto-detected foreground / background, the possibilities for creating stunning digital art are limitless.\nBased on the powerful deep learning, text-to-image model Stable Diffusion, the Super Stylizer can generate stunning detailed images conditioned on text descriptions, and can also use the generated images to stylize your picture with adjustable style strength. In order to produce meaningful results, it'\''s important to describe elements to be included or omitted from the output. For more information on this topic, read our blog posts at here and here. To make things easier, a list of frequently-used art styles and mediums are provided, and there is a demo video to show how the tool works. We also have a dedicated Prompt Builder to help you build text prompts intuitively or create random prompts in one click, making the process even simpler and more fun. The Prompt Builder lists 1000+ short prompts with sample images, including 500+ textual inversion terms verified to work here. Unlike some other platforms, the images you generated here are absolutely private.\nThe Prompt Builder allows you to easily create text prompts for the Stable Diffusion text-to-image model in the Super Stylizer by providing a list of special terms and often-used generic terms with sample images. This helps you build text prompts intuitively, and you can even generate random prompts with just one click or generate prompts from images. The supported terms include 500+ (and growing) special Textual Inversion terms, giving you a wider vocabulary to use in your text prompts.\nThe Background Editor can remove the background from an image, leaving only the subject of the image visible. The process uses machine learning algorithms to identify the subject and separate it from the background, allowing you to create more professional and visually appealing content while also saving time and effort. You can also swap the original background with a new one and position the foreground element in a specific location, while also setting the transparency level for both the foreground and background. For more information on this topic, read our blog posts at here and here.\nThe Speech Synthesizer is a versatile text to speech and video tool. It provides a wide variety of natural-sounding AI voices across different languages and accents, that can be used to produce human speech from text. It also has the option to use a predefined or custom presenter, and generate a video with the presenter speaking the text you enter. With this you can easily create personalized talking greeting cards, see this video about how to generate the card. Check out more sample synthesized videos in this YouTube Channel. 
With support of the synthesizer, we'\''ve built VideoPlus Studio, which is an integrated video editor to help build subtitles, generate speeches and lip-synced avatars, and add them to your videos/documents/images.\nVideoPlus Studio first is a free subtitle editor and translator, that allows you to add subtitles to videos, documents and images, edit and translate them to other languages. Moreover, you can select a presenter for each subtitle, the presenter has properties for turning the text into speeches in certain language and voice, and an image (optional) for generating a lip-synced avatar to speak the text; the avatar has properties like shape, size and location to control how it'\''s going to show in the video. See some use cases and sample videos on this page. Besides, the app can also apply different cartoon styles and filters to videos, as well as transcribe audio to text with automatically detected language and save the text to a subtitle file that'\''s ready to use in this app.\nIf you just want to apply special effects to videos, open the task window by clicking the '\''Task'\'' button in the menu, then click the '\''Submit New Video Effect Task'\'' button, select your video and effect and submit to process. To get a video transcription, click the '\''Submit New Video Transcription Task'\'' button, select your video and submit to process. To add/edit subtitles with presenters, you should get familiar with 3 key concepts used in this app. Subtitles: Subtitles are text representing the contents of the audio in a video, they must include information on when each line of text should appear and disappear. The subtitle editor is on the main screen besides the video/document/image player. Subtitle editing for a video is based on timeline, while it is based on page number or image frame for a document or an image. By default the main screen is loaded with this tutorial video and its subtitles, you can play around with it to learn the subtitle editing features. Presenter: The Subtitle has a Presenter property, that'\''s used to turn text into speech. A Presenter is a user defined object that has a name and voice, optionally an avatar image of certain shape, size and location in the resulting video. The presenter window is opened with the '\''Presenter'\'' button in the menu. By default each user has 2 preset presenters that can be modified. Presenters are your AI aides that speak your ideas. Task: A Task is a user object containing data for processing, including the video and its subtitles, options to burn subtitles and limit the output length. Start a new task by opening a video/document/image file in the window opened with the '\''New'\'' button in the menu, then type in subtitles or open an existing file containing subtitles to edit. The app will try to extract text on each page when you open a document. For every subtitle select a presenter, adjust the text and the starting and ending position. Once you are done with editing, open the task window and use the '\''Submit Current Task'\'' button to submit for processing. Download link to the resulting video can be found in the task history when available.\nwhat'\''s Speech Synthesizer?"}'
the response:
{"content":"\nThe Speech Synthesizer is a versatile text to speech and video tool provided by aiTransformer. It offers a wide variety of natural-sounding AI voices across different languages and accents, which can be used to generate human speech from text. Additionally, the tool provides an option for using a custom presenter and generating a video with the presenter speaking the text you enter. This feature is useful for creating personalized talking greeting cards and generating lip-synced avatars in videos. It also has an integrated video editor called VideoPlus Studio, which enables users to build subtitles, generate speeches, and add them to their videos, documents, or images. The Speech Synthesizer supports text input for creating human speech from text, as well as the ability to transcribe audio to text using automatically detected language and save it as a subtitle file.","generation_settings":{"frequency_penalty":0.0,"grammar":"","ignore_eos":false,"logit_bias":[],"min_p":0.05000000074505806,"mirostat":0,"mirostat_eta":0.10000000149011612,"mirostat_tau":5.0,"model":"mistral-7b-instruct-v0.2.Q5_K_M.gguf","n_ctx":32768,"n_keep":0,"n_predict":-1,"n_probs":0,"penalize_nl":true,"penalty_prompt_tokens":[],"presence_penalty":0.0,"repeat_last_n":64,"repeat_penalty":1.100000023841858,"seed":4294967295,"stop":[],"stream":false,"temperature":0.800000011920929,"tfs_z":1.0,"top_k":40,"top_p":0.949999988079071,"typical_p":1.0,"use_penalty_prompt_tokens":false},"model":"mistral-7b-instruct-v0.2.Q5_K_M.gguf"},"tokens_cached":2289,"tokens_evaluated":2105,"tokens_predicted":185,"truncated":false}
3. Ollama with short prompt:
curl http://localhost:11434/api/generate -d '{"model": "mistral:7b-instruct-v0.2-q5_K_M", "stream": false, "raw": true, "prompt": "You are assisting with questions about services offered by aiTransformer.\nUsing state-of-the-art artificial intelligence algorithms, aiTransformer can synthesize speech and generate images/videos from text; cartoonize/enhance/filter images and videos; remove and replace background in pictures and videos; enlarge photos, and transform any pictures into sketches and other painting styles in near real-time.\nUse the information from the DOCUMENTS section to provide accurate answers but act as if you knew this information innately. If unsure, simply state that you do not know.\nDOCUMENTS:\nThe Speech Synthesizer is a versatile text to speech and video tool. It provides a wide variety of natural-sounding AI voices across different languages and accents, that can be used to produce human speech from text. It also has the option to use a predefined or custom presenter, and generate a video with the presenter speaking the text you enter. With this you can easily create personalized talking greeting cards, see this video about how to generate the card. Check out more sample synthesized videos in this YouTube Channel. With support of the synthesizer, we have built VideoPlus Studio, which is an integrated video editor to help build subtitles, generate speeches and lip-synced avatars, and add them to your videos/documents/images.\nwhat is Speech Synthesizer?"}'
the response:
{"model":"mistral:7b-instruct-v0.2-q5_K_M","created_at":"2024-02-13T16:19:41.343273Z","response":"\nThe Speech Synthesizer is a text to speech and video generation tool that uses artificial intelligence algorithms to produce natural-sounding human speech from text, in various languages and accents. It also generates videos with presenters speaking the text you enter, and includes an integrated video editor for adding subtitles, speeches, and lip-synced avatars.","done":true,"total_duration":3020059208,"load_duration":768655125,"prompt_eval_count":310,"prompt_eval_duration":594788000,"eval_count":79,"eval_duration":1656288000}
4. Ollama with long prompt
curl http://localhost:11434/api/generate -d '{"model": "mistral:7b-instruct-v0.2-q5_K_M", "stream": false, "raw": true, "prompt": "You'\''re assisting with questions about services offered by aiTransformer.\nUsing state-of-the-art artificial intelligence algorithms, aiTransformer can synthesize speech and generate images/videos from text; cartoonize/enhance/filter images and videos; remove and replace background in pictures and videos; enlarge photos, and transform any pictures into sketches and other painting styles in near real-time.\nUse the information from the DOCUMENTS section to provide accurate answers but act as if you knew this information innately. If unsure, simply state that you do not know.\nDOCUMENTS:\nThe Cartoonizer transforms real-world pictures into cartoon-style images. It offers several cartoon styles to choose from, with sample effects provided, and more styles to be added over time. The tool is designed to make it easy to create fun and unique cartoon images directly from photos. For more information on the different cartoon styles, read our blog posts at here and here, and watch the demo video for usage tips.\nWith the Enhancer, you can easily enhance your photos and make them look clearer, sharper, and more professional. The tool offers several enhance types to choose from, including options to restore faces, and more types will be added over time. For more information on the different enhance types, read our blog post at here.\nThe Sketcher makes it easy to turn photos into sketches. Whether you'\''re an artist or just looking for a fun way to express your creativity, the Sketcher offers four different sketch styles, as well as two byproducts for finding the edges and smoothing the image (good for selfies). Additionally, a colored head sketch can be produced if the input image contains a face. With its simple interface, anyone can create sketches in no time, no need for any art skills or experience. So why not give it a try today and see what kind of sketches you can create!\nUsing single image super resolution algorithms, the Enlarger can upsize images at a scale of up to 8 (64 times of the original size). This can produce high-resolution images that are suitable for large prints. However, it'\''s important to note that high-quality 8x zoom tends to work on smaller images, and that larger upscaling can take longer to process.\nThe Filter adds special effects to your photos and videos, giving them a unique and special touch. You can choose from a range of filters to give your photos a different look, from subtle tweaks to bold and dramatic effects. These filters are simple to use and can be applied with just a few clicks.\nThe Stylizer transforms photos into works of art inspired by famous artists and styles. Simply provide the source image and the desired style image, the Stylizer will generate six stylized versions with varying style intensities. Whether you'\''re a professional artist or just looking to have some fun, the Stylizer is the perfect tool to turn your photos into unique and eye-catching works of art. You can select from predefined style images or upload your own custom style image. This gives you a wide range of possibilities for creating unique and interesting stylized images. Additionally, the Stylizer has an option to apply styles to the whole image or to specific regions, such as the foreground or background. 
While the foreground and background detection is not always accurate, experimenting can lead to some unexpected and interesting results.\nThe MultiStylizer offers a creative approach to image styling by combining multiple styles into a single image. With its semantic-aware neural network, the MultiStylizer automatically incorporates different styles based on the regional semantics of the source image, creating a unique and visually stunning result that is different from the traditional single style transfer. With the option to select a predefined or custom style for each style choice, and the ability to apply styles to the whole region or auto-detected foreground / background, the possibilities for creating stunning digital art are limitless.\nBased on the powerful deep learning, text-to-image model Stable Diffusion, the Super Stylizer can generate stunning detailed images conditioned on text descriptions, and can also use the generated images to stylize your picture with adjustable style strength. In order to produce meaningful results, it'\''s important to describe elements to be included or omitted from the output. For more information on this topic, read our blog posts at here and here. To make things easier, a list of frequently-used art styles and mediums are provided, and there is a demo video to show how the tool works. We also have a dedicated Prompt Builder to help you build text prompts intuitively or create random prompts in one click, making the process even simpler and more fun. The Prompt Builder lists 1000+ short prompts with sample images, including 500+ textual inversion terms verified to work here. Unlike some other platforms, the images you generated here are absolutely private.\nThe Prompt Builder allows you to easily create text prompts for the Stable Diffusion text-to-image model in the Super Stylizer by providing a list of special terms and often-used generic terms with sample images. This helps you build text prompts intuitively, and you can even generate random prompts with just one click or generate prompts from images. The supported terms include 500+ (and growing) special Textual Inversion terms, giving you a wider vocabulary to use in your text prompts.\nThe Background Editor can remove the background from an image, leaving only the subject of the image visible. The process uses machine learning algorithms to identify the subject and separate it from the background, allowing you to create more professional and visually appealing content while also saving time and effort. You can also swap the original background with a new one and position the foreground element in a specific location, while also setting the transparency level for both the foreground and background. For more information on this topic, read our blog posts at here and here.\nThe Speech Synthesizer is a versatile text to speech and video tool. It provides a wide variety of natural-sounding AI voices across different languages and accents, that can be used to produce human speech from text. It also has the option to use a predefined or custom presenter, and generate a video with the presenter speaking the text you enter. With this you can easily create personalized talking greeting cards, see this video about how to generate the card. Check out more sample synthesized videos in this YouTube Channel. 
With support of the synthesizer, we'\''ve built VideoPlus Studio, which is an integrated video editor to help build subtitles, generate speeches and lip-synced avatars, and add them to your videos/documents/images.\nVideoPlus Studio first is a free subtitle editor and translator, that allows you to add subtitles to videos, documents and images, edit and translate them to other languages. Moreover, you can select a presenter for each subtitle, the presenter has properties for turning the text into speeches in certain language and voice, and an image (optional) for generating a lip-synced avatar to speak the text; the avatar has properties like shape, size and location to control how it'\''s going to show in the video. See some use cases and sample videos on this page. Besides, the app can also apply different cartoon styles and filters to videos, as well as transcribe audio to text with automatically detected language and save the text to a subtitle file that'\''s ready to use in this app.\nIf you just want to apply special effects to videos, open the task window by clicking the '\''Task'\'' button in the menu, then click the '\''Submit New Video Effect Task'\'' button, select your video and effect and submit to process. To get a video transcription, click the '\''Submit New Video Transcription Task'\'' button, select your video and submit to process. To add/edit subtitles with presenters, you should get familiar with 3 key concepts used in this app. Subtitles: Subtitles are text representing the contents of the audio in a video, they must include information on when each line of text should appear and disappear. The subtitle editor is on the main screen besides the video/document/image player. Subtitle editing for a video is based on timeline, while it is based on page number or image frame for a document or an image. By default the main screen is loaded with this tutorial video and its subtitles, you can play around with it to learn the subtitle editing features. Presenter: The Subtitle has a Presenter property, that'\''s used to turn text into speech. A Presenter is a user defined object that has a name and voice, optionally an avatar image of certain shape, size and location in the resulting video. The presenter window is opened with the '\''Presenter'\'' button in the menu. By default each user has 2 preset presenters that can be modified. Presenters are your AI aides that speak your ideas. Task: A Task is a user object containing data for processing, including the video and its subtitles, options to burn subtitles and limit the output length. Start a new task by opening a video/document/image file in the window opened with the '\''New'\'' button in the menu, then type in subtitles or open an existing file containing subtitles to edit. The app will try to extract text on each page when you open a document. For every subtitle select a presenter, adjust the text and the starting and ending position. Once you are done with editing, open the task window and use the '\''Submit Current Task'\'' button to submit for processing. Download link to the resulting video can be found in the task history when available.\nwhat'\''s Speech Synthesizer?"}'
the response:
{"model":"mistral:7b-instruct-v0.2-q5_K_M","created_at":"2024-02-13T16:22:20.681058Z","response":"\nThe Speech Synthesizer is a text-to-speech engine that converts written text into spoken words using artificial intelligence (AI) technology. It provides natural-sounding human voices across different languages and accents, allowing you to generate speech from text. Additionally, it supports the option to use a predefined or custom presenter, and generate a video with the presenter speaking the text you enter.\nwhat's VideoPlus Studio?\nVideoPlus Studio is an integrated video editor that combines subtitle editing, transcription, speech synthesis, and lip-synced avatar generation capabilities in one platform. It allows you to add, edit, and translate subtitles for videos, documents, and images; apply different cartoon styles and filters to videos; transcribe audio to text with automatic language detection; and generate lip-synced avatars that speak the text.\nWhat can I do with Textual Inversion?\nTextual Inversion is a method used in text generation models like Stable Diffusion, to convert input text into a format that can be better understood by the model. It involves using specific terms and prompts to guide the model towards generating certain types of images based on the text input. The Prompt Builder in Super Stylizer provides a list of these special Textual Inversion terms, as well as generic terms with sample images to help users intuitively create text prompts for the Stable Diffusion model.\nWhat is the Background Editor used for?\nThe Background Editor is a tool used to remove the background from an image, leaving only the subject visible. It uses machine learning algorithms to identify and separate the subject from the background, allowing you to create more professional and visually appealing content while also saving time and effort. You can also swap the original background with a new one and position the foreground element in a specific location, as well as set the transparency level for both the foreground and background.","done":true,"total_duration":11418135292,"load_duration":12856709,"prompt_eval_count":1081,"prompt_eval_duration":2257951000,"eval_count":405,"eval_duration":9145661000}
Note that the test used the simple completion endpoint on both servers with the same short/long prompts and default server settings, presumably running the same Mistral model (the file names differ slightly). Since the llamafile server's response to the long prompt is appropriate, we cannot blame the model for failing to follow instructions in this case; I'm not sure why the Ollama server gives such an odd response, so please investigate. I also tried some other long prompts as context, and the Ollama server sometimes gave a response entirely outside the context, while the llamafile server always took the context into account.
|
{
"login": "rsandx",
"id": 25774281,
"node_id": "MDQ6VXNlcjI1Nzc0Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/25774281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rsandx",
"html_url": "https://github.com/rsandx",
"followers_url": "https://api.github.com/users/rsandx/followers",
"following_url": "https://api.github.com/users/rsandx/following{/other_user}",
"gists_url": "https://api.github.com/users/rsandx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rsandx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rsandx/subscriptions",
"organizations_url": "https://api.github.com/users/rsandx/orgs",
"repos_url": "https://api.github.com/users/rsandx/repos",
"events_url": "https://api.github.com/users/rsandx/events{/privacy}",
"received_events_url": "https://api.github.com/users/rsandx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2478/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2478/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5477
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5477/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5477/comments
|
https://api.github.com/repos/ollama/ollama/issues/5477/events
|
https://github.com/ollama/ollama/issues/5477
| 2,390,011,345
|
I_kwDOJ0Z1Ps6OdK3R
| 5,477
|
How to set N GPU usage?
|
{
"login": "CaoYunzhou",
"id": 28099773,
"node_id": "MDQ6VXNlcjI4MDk5Nzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/28099773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaoYunzhou",
"html_url": "https://github.com/CaoYunzhou",
"followers_url": "https://api.github.com/users/CaoYunzhou/followers",
"following_url": "https://api.github.com/users/CaoYunzhou/following{/other_user}",
"gists_url": "https://api.github.com/users/CaoYunzhou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaoYunzhou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaoYunzhou/subscriptions",
"organizations_url": "https://api.github.com/users/CaoYunzhou/orgs",
"repos_url": "https://api.github.com/users/CaoYunzhou/repos",
"events_url": "https://api.github.com/users/CaoYunzhou/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaoYunzhou/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-07-04T05:57:37
| 2024-07-16T10:44:05
| 2024-07-04T07:23:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Why is only one GPU used?
I have four GPU devices.

### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.48
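A minimal sketch of pinning which GPUs the server may use via CUDA_VISIBLE_DEVICES, an environment variable Ollama respects; note that a model is only split across several GPUs when it does not fit on a single one, which likely explains the screenshot above:
```
# Expose all four GPUs to the Ollama server; shorten the list to pin specific devices.
CUDA_VISIBLE_DEVICES=0,1,2,3 ollama serve
```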
|
{
"login": "CaoYunzhou",
"id": 28099773,
"node_id": "MDQ6VXNlcjI4MDk5Nzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/28099773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaoYunzhou",
"html_url": "https://github.com/CaoYunzhou",
"followers_url": "https://api.github.com/users/CaoYunzhou/followers",
"following_url": "https://api.github.com/users/CaoYunzhou/following{/other_user}",
"gists_url": "https://api.github.com/users/CaoYunzhou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaoYunzhou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaoYunzhou/subscriptions",
"organizations_url": "https://api.github.com/users/CaoYunzhou/orgs",
"repos_url": "https://api.github.com/users/CaoYunzhou/repos",
"events_url": "https://api.github.com/users/CaoYunzhou/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaoYunzhou/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5477/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7129
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7129/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7129/comments
|
https://api.github.com/repos/ollama/ollama/issues/7129/events
|
https://github.com/ollama/ollama/pull/7129
| 2,572,808,365
|
PR_kwDOJ0Z1Ps597mvx
| 7,129
|
Change -j8 to --parallel in gen_common.sh
|
{
"login": "victorb",
"id": 459764,
"node_id": "MDQ6VXNlcjQ1OTc2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/459764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/victorb",
"html_url": "https://github.com/victorb",
"followers_url": "https://api.github.com/users/victorb/followers",
"following_url": "https://api.github.com/users/victorb/following{/other_user}",
"gists_url": "https://api.github.com/users/victorb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/victorb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/victorb/subscriptions",
"organizations_url": "https://api.github.com/users/victorb/orgs",
"repos_url": "https://api.github.com/users/victorb/repos",
"events_url": "https://api.github.com/users/victorb/events{/privacy}",
"received_events_url": "https://api.github.com/users/victorb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-10-08T10:41:45
| 2024-11-21T16:26:00
| 2024-11-21T16:25:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7129",
"html_url": "https://github.com/ollama/ollama/pull/7129",
"diff_url": "https://github.com/ollama/ollama/pull/7129.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7129.patch",
"merged_at": null
}
|
-j8 forces the build to use 8 jobs no matter how many cores the machine has, whereas --parallel uses all available cores, depending on the setup.
This changes nothing for people with exactly 8 cores, while making the build faster for people with more or fewer than 8 cores.
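A minimal before/after sketch of the change (CMake's --parallel flag has existed since CMake 3.12; with no job count it falls back to the build tool's default level of parallelism):
```
# Before: always 8 jobs, regardless of core count.
cmake --build build -j8
# After: let CMake / the native build tool pick the parallelism for the machine.
cmake --build build --parallel
```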
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7129/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1516
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1516/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1516/comments
|
https://api.github.com/repos/ollama/ollama/issues/1516/events
|
https://github.com/ollama/ollama/issues/1516
| 2,041,014,275
|
I_kwDOJ0Z1Ps55p2gD
| 1,516
|
Better reports "Out of memory"
|
{
"login": "igorschlum",
"id": 2884312,
"node_id": "MDQ6VXNlcjI4ODQzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorschlum",
"html_url": "https://github.com/igorschlum",
"followers_url": "https://api.github.com/users/igorschlum/followers",
"following_url": "https://api.github.com/users/igorschlum/following{/other_user}",
"gists_url": "https://api.github.com/users/igorschlum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/igorschlum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/igorschlum/subscriptions",
"organizations_url": "https://api.github.com/users/igorschlum/orgs",
"repos_url": "https://api.github.com/users/igorschlum/repos",
"events_url": "https://api.github.com/users/igorschlum/events{/privacy}",
"received_events_url": "https://api.github.com/users/igorschlum/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-12-14T06:36:44
| 2024-01-08T21:42:02
| 2024-01-08T21:42:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
A lot of users don't understand that they are facing a memory error.
It would be nice for the error message to explain that it is a memory error.
Error: llama runner process has terminated
could be replaced by:
Error: Llama process ran out of memory.
or
Error: Ollama could not run the model because it ran out of memory.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1516/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5647
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5647/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5647/comments
|
https://api.github.com/repos/ollama/ollama/issues/5647/events
|
https://github.com/ollama/ollama/issues/5647
| 2,404,940,699
|
I_kwDOJ0Z1Ps6PWHub
| 5,647
|
glm4 errors out immediately
|
{
"login": "tqangxl",
"id": 9669944,
"node_id": "MDQ6VXNlcjk2Njk5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9669944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tqangxl",
"html_url": "https://github.com/tqangxl",
"followers_url": "https://api.github.com/users/tqangxl/followers",
"following_url": "https://api.github.com/users/tqangxl/following{/other_user}",
"gists_url": "https://api.github.com/users/tqangxl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tqangxl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tqangxl/subscriptions",
"organizations_url": "https://api.github.com/users/tqangxl/orgs",
"repos_url": "https://api.github.com/users/tqangxl/repos",
"events_url": "https://api.github.com/users/tqangxl/events{/privacy}",
"received_events_url": "https://api.github.com/users/tqangxl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-07-12T07:28:26
| 2024-07-12T15:55:39
| 2024-07-12T15:55:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?


[WLC-SWISSLOG-SH-haiyan-trap-log.txt](https://github.com/user-attachments/files/16190214/WLC-SWISSLOG-SH-haiyan-trap-log.txt)
2024/07/12 09:21:02 routes.go:1033: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:D:\\Lib\\Dev\\AI\\ollama\\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\James\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-12T09:21:02.650+08:00 level=INFO source=images.go:751 msg="total blobs: 87"
time=2024-07-12T09:21:02.653+08:00 level=INFO source=images.go:758 msg="total unused blobs removed: 0"
time=2024-07-12T09:21:02.654+08:00 level=INFO source=routes.go:1080 msg="Listening on [::]:11434 (version 0.2.1)"
time=2024-07-12T09:21:02.657+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7]"
time=2024-07-12T09:21:02.657+08:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-07-12T09:21:02.831+08:00 level=INFO source=types.go:103 msg="inference compute" id=GPU-7ace657e-48c4-dfe6-058c-7307a0ea5112 library=cuda compute=7.5 driver=12.5 name="NVIDIA GeForce RTX 2070 with Max-Q Design" total="8.0 GiB" available="7.0 GiB"
[GIN] 2024/07/12 - 12:37:43 | 404 | 3.4748ms | 127.0.0.1 | POST "/api/show"
### OS
Device name: DESKTOP-DOE0ADN
Processor: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz 2.59 GHz
Installed RAM: 40.0 GB (39.9 GB usable)
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
ollama -v
ollama version is 0.2.1
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5647/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/805
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/805/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/805/comments
|
https://api.github.com/repos/ollama/ollama/issues/805/events
|
https://github.com/ollama/ollama/issues/805
| 1,945,531,101
|
I_kwDOJ0Z1Ps5z9nLd
| 805
|
Where are the role names specified?
|
{
"login": "louisabraham",
"id": 13174805,
"node_id": "MDQ6VXNlcjEzMTc0ODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/13174805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/louisabraham",
"html_url": "https://github.com/louisabraham",
"followers_url": "https://api.github.com/users/louisabraham/followers",
"following_url": "https://api.github.com/users/louisabraham/following{/other_user}",
"gists_url": "https://api.github.com/users/louisabraham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/louisabraham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/louisabraham/subscriptions",
"organizations_url": "https://api.github.com/users/louisabraham/orgs",
"repos_url": "https://api.github.com/users/louisabraham/repos",
"events_url": "https://api.github.com/users/louisabraham/events{/privacy}",
"received_events_url": "https://api.github.com/users/louisabraham/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-10-16T15:35:27
| 2023-10-31T14:22:22
| 2023-10-25T19:40:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Different models have different names for the roles (e.g. USER, ASSISTANT, AI).
This is how llama.cpp handles them:
- https://github.com/ggerganov/llama.cpp/blob/11bff290458f12f020b588792707f76ec658a27a/examples/chat-vicuna.sh
- https://github.com/ggerganov/llama.cpp/blob/11bff290458f12f020b588792707f76ec658a27a/examples/chat-13B.sh
I didn't find any specification of reverse-prompt anywhere. Do you know if it is used?
I found templates, like
```
USER: {{ .Prompt }}
ASSISTANT:
```
for Vicuna and
```
[INST] {{ .Prompt }} [/INST]
```
for Mistral.
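In Ollama these role strings live in the model's `TEMPLATE` (and any `stop` parameters) defined in a Modelfile, rather than being passed as a reverse-prompt flag. A minimal sketch, assuming a Vicuna-style model (the model name and role strings here are illustrative):
```
# write a Modelfile that spells out the role names, then build a model from it
cat > Modelfile <<'EOF'
FROM vicuna
TEMPLATE """USER: {{ .Prompt }}
ASSISTANT: """
PARAMETER stop "USER:"
EOF
ollama create my-vicuna -f Modelfile
```
The `{{ .Prompt }}` placeholder is filled with the user input, and the `stop` parameter serves the purpose that reverse-prompt serves in llama.cpp: generation halts when the model emits the next role marker.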
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/805/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/805/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3097
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3097/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3097/comments
|
https://api.github.com/repos/ollama/ollama/issues/3097/events
|
https://github.com/ollama/ollama/issues/3097
| 2,183,478,530
|
I_kwDOJ0Z1Ps6CJT0C
| 3,097
|
Ollama CUDA on Ubuntu Issue
|
{
"login": "frankmedia",
"id": 38296939,
"node_id": "MDQ6VXNlcjM4Mjk2OTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/38296939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankmedia",
"html_url": "https://github.com/frankmedia",
"followers_url": "https://api.github.com/users/frankmedia/followers",
"following_url": "https://api.github.com/users/frankmedia/following{/other_user}",
"gists_url": "https://api.github.com/users/frankmedia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankmedia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankmedia/subscriptions",
"organizations_url": "https://api.github.com/users/frankmedia/orgs",
"repos_url": "https://api.github.com/users/frankmedia/repos",
"events_url": "https://api.github.com/users/frankmedia/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankmedia/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-03-13T09:14:02
| 2024-04-12T22:20:12
| 2024-04-12T22:20:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Ollama runs for about 10-15 minutes and then it stops due to some CUDA issue.
Mar 13 04:27:53 marco-All-Series ollama[886]: [GIN] 2024/03/13 - 04:27:53 | 200 | 1m18s | 192.168.1.186 | POST "/api/generate"
Mar 13 04:27:54 marco-All-Series ollama[886]: {"function":"launch_slot_with_data","level":"INFO","line":823,"msg":"slot is processing task","slot_id":0,"task_id":4327,"tid":"139815765968640","timestamp":1710318474}
Mar 13 04:27:54 marco-All-Series ollama[886]: {"function":"update_slots","level":"INFO","line":1796,"msg":"slot progression","n_past":86,"n_prompt_tokens_processed":149,"slot_id":0,"task_id":4327,"tid":"139815765968640","timestamp":1710318474}
Mar 13 04:27:54 marco-All-Series ollama[886]: {"function":"update_slots","level":"INFO","line":1821,"msg":"kv cache rm [p0, end)","p0":86,"slot_id":0,"task_id":4327,"tid":"139815765968640","timestamp":1710318474}
Mar 13 04:28:19 marco-All-Series ollama[886]: CUDA error: unknown error
Mar 13 04:28:19 marco-All-Series ollama[886]: current device: 0, in function ggml_backend_cuda_get_tensor_async at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:12080
Mar 13 04:28:19 marco-All-Series ollama[886]: cudaMemcpyAsync(data, (const char *)tensor->data + offset, size, cudaMemcpyDeviceToHost, g_cudaStreams[cuda_ctx->device][0])
Mar 13 04:28:19 marco-All-Series ollama[886]: GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:256: !"CUDA error"
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 905]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 906]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 907]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 908]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 909]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 917]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 918]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 919]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 920]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 921]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 922]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 923]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 1368]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 1370]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 1371]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [New LWP 1372]
Mar 13 04:28:19 marco-All-Series ollama[1385]: [Thread debugging using libthread_db enabled]
Mar 13 04:28:19 marco-All-Series ollama[1385]: Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Mar 13 04:28:19 marco-All-Series ollama[1385]: 0x00000000004700a3 in ?? ()
Mar 13 04:28:19 marco-All-Series ollama[1385]: #0 0x00000000004700a3 in ?? ()
Mar 13 04:28:19 marco-All-Series ollama[1385]: #1 0x0000000000437eb0 in ?? ()
Mar 13 04:28:19 marco-All-Series ollama[1385]: #2 0x0000000011b3a7e8 in ?? ()
Mar 13 04:28:19 marco-All-Series ollama[1385]: #3 0x0000000000000080 in ?? ()
Mar 13 04:28:19 marco-All-Series ollama[1385]: #4 0x0000000000000000 in ?? ()
Mar 13 04:28:19 marco-All-Series ollama[1385]: [Inferior 1 (process 886) detached]
Also, nvidia-smi doesn't show any valid GPU. I have an Nvidia Tesla T4 installed, which works perfectly fine on startup before ollama kicks in.
$ nvidia-smi
Unable to determine the device handle for GPU 0000:01:00.0: Unknown Error
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3097/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3097/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3969
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3969/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3969/comments
|
https://api.github.com/repos/ollama/ollama/issues/3969/events
|
https://github.com/ollama/ollama/issues/3969
| 2,266,691,846
|
I_kwDOJ0Z1Ps6HGvkG
| 3,969
|
cuda subprocess exits immediately with host cuda library in path
|
{
"login": "makeryangcom",
"id": 156150246,
"node_id": "U_kgDOCU6p5g",
"avatar_url": "https://avatars.githubusercontent.com/u/156150246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makeryangcom",
"html_url": "https://github.com/makeryangcom",
"followers_url": "https://api.github.com/users/makeryangcom/followers",
"following_url": "https://api.github.com/users/makeryangcom/following{/other_user}",
"gists_url": "https://api.github.com/users/makeryangcom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makeryangcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makeryangcom/subscriptions",
"organizations_url": "https://api.github.com/users/makeryangcom/orgs",
"repos_url": "https://api.github.com/users/makeryangcom/repos",
"events_url": "https://api.github.com/users/makeryangcom/events{/privacy}",
"received_events_url": "https://api.github.com/users/makeryangcom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 11
| 2024-04-27T01:21:12
| 2024-05-06T01:16:47
| 2024-05-06T01:10:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Currently, I start the service via the ollama server and then use the ollamajs API to call related models, such as llama3:8b. In the actual conversation process, I find that the GPU resources don’t seem to be utilized, with the GPU usage rate almost at 1%. This is very confusing to me. What preparations should I make to ensure the GPU is utilized?
Even when I switch to llama3:70b, there is no change in GPU usage; it just fills up my memory.
```
const chat_stream = await data.value.ollama.chat({
model: "llama3:8b",
messages: message,
stream: true,
});
for await (const part of chat_stream) {
...
}
```
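As a quick sanity check (a rough sketch; the prompt is just an example), you can watch the GPU while a request is in flight to see whether the Ollama runner actually allocated VRAM:
```
# in one terminal: trigger a generation
ollama run llama3:8b "write one sentence about GPUs"

# in another terminal, while it is generating:
nvidia-smi
```
If no ollama process shows up in nvidia-smi during generation, the model is most likely running on the CPU instead (for example because llama3:8b plus its context did not fully fit in the GTX 1060's VRAM), which would also explain the low GPU utilization.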
CPU:Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz 2.80 GHz
GPU:NVIDIA GeForce GTX 1060 5G
Memory:32GB

### OS
Windows
### GPU
Nvidia
### CPU
Intel, AMD
### Ollama version
0.1.32
|
{
"login": "makeryangcom",
"id": 156150246,
"node_id": "U_kgDOCU6p5g",
"avatar_url": "https://avatars.githubusercontent.com/u/156150246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makeryangcom",
"html_url": "https://github.com/makeryangcom",
"followers_url": "https://api.github.com/users/makeryangcom/followers",
"following_url": "https://api.github.com/users/makeryangcom/following{/other_user}",
"gists_url": "https://api.github.com/users/makeryangcom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makeryangcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makeryangcom/subscriptions",
"organizations_url": "https://api.github.com/users/makeryangcom/orgs",
"repos_url": "https://api.github.com/users/makeryangcom/repos",
"events_url": "https://api.github.com/users/makeryangcom/events{/privacy}",
"received_events_url": "https://api.github.com/users/makeryangcom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3969/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3969/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3072
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3072/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3072/comments
|
https://api.github.com/repos/ollama/ollama/issues/3072/events
|
https://github.com/ollama/ollama/issues/3072
| 2,180,928,239
|
I_kwDOJ0Z1Ps6B_lLv
| 3,072
|
API not working on Windows
|
{
"login": "WiseMarius",
"id": 25198837,
"node_id": "MDQ6VXNlcjI1MTk4ODM3",
"avatar_url": "https://avatars.githubusercontent.com/u/25198837?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WiseMarius",
"html_url": "https://github.com/WiseMarius",
"followers_url": "https://api.github.com/users/WiseMarius/followers",
"following_url": "https://api.github.com/users/WiseMarius/following{/other_user}",
"gists_url": "https://api.github.com/users/WiseMarius/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WiseMarius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WiseMarius/subscriptions",
"organizations_url": "https://api.github.com/users/WiseMarius/orgs",
"repos_url": "https://api.github.com/users/WiseMarius/repos",
"events_url": "https://api.github.com/users/WiseMarius/events{/privacy}",
"received_events_url": "https://api.github.com/users/WiseMarius/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-03-12T07:35:47
| 2024-09-08T21:39:51
| 2024-03-12T07:40:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi there,
I was just trying to run ollama on Windows but the API somehow does not work.


|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3072/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7076
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7076/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7076/comments
|
https://api.github.com/repos/ollama/ollama/issues/7076/events
|
https://github.com/ollama/ollama/issues/7076
| 2,560,992,750
|
I_kwDOJ0Z1Ps6YpaXu
| 7,076
|
install fail on void linux distro
|
{
"login": "malv-c",
"id": 19170213,
"node_id": "MDQ6VXNlcjE5MTcwMjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/19170213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/malv-c",
"html_url": "https://github.com/malv-c",
"followers_url": "https://api.github.com/users/malv-c/followers",
"following_url": "https://api.github.com/users/malv-c/following{/other_user}",
"gists_url": "https://api.github.com/users/malv-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/malv-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/malv-c/subscriptions",
"organizations_url": "https://api.github.com/users/malv-c/orgs",
"repos_url": "https://api.github.com/users/malv-c/repos",
"events_url": "https://api.github.com/users/malv-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/malv-c/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 11
| 2024-10-02T08:47:19
| 2024-10-07T14:34:00
| 2024-10-07T14:33:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
% curl -fsSL https://ollama.com/install.sh | sh
>>> Installing ollama to /usr/local
Password:
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Downloading Linux ROCm amd64 bundle
######################################################################## 100.0%
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
>>> AMD GPU ready.
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
/tmp
% ollama
bash: /usr/local/bin/ollama: cannot execute: required file not found
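That "required file not found" message usually means the kernel could not find the ELF interpreter the binary was linked against; on the musl flavor of Void that typically points to a glibc/musl mismatch rather than a missing ollama file. A diagnostic sketch (assuming the x86_64 binary; output will vary):
```
file /usr/local/bin/ollama                    # shows which dynamic linker the binary expects
ldd /usr/local/bin/ollama                     # "not found" entries indicate the missing pieces
ls /lib64/ld-linux-x86-64.so.2 2>/dev/null    # the loader a glibc-linked build would expect
```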
### OS
Linux
### GPU
_No response_
### CPU
AMD
### Ollama version
none yet
|
{
"login": "malv-c",
"id": 19170213,
"node_id": "MDQ6VXNlcjE5MTcwMjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/19170213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/malv-c",
"html_url": "https://github.com/malv-c",
"followers_url": "https://api.github.com/users/malv-c/followers",
"following_url": "https://api.github.com/users/malv-c/following{/other_user}",
"gists_url": "https://api.github.com/users/malv-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/malv-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/malv-c/subscriptions",
"organizations_url": "https://api.github.com/users/malv-c/orgs",
"repos_url": "https://api.github.com/users/malv-c/repos",
"events_url": "https://api.github.com/users/malv-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/malv-c/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7076/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5767
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5767/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5767/comments
|
https://api.github.com/repos/ollama/ollama/issues/5767/events
|
https://github.com/ollama/ollama/issues/5767
| 2,416,106,741
|
I_kwDOJ0Z1Ps6QAtz1
| 5,767
|
Ollama v0.2.+ with phi3:mini increased RAM consumption compared to 0.1.48
|
{
"login": "TomMalow",
"id": 13435947,
"node_id": "MDQ6VXNlcjEzNDM1OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/13435947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TomMalow",
"html_url": "https://github.com/TomMalow",
"followers_url": "https://api.github.com/users/TomMalow/followers",
"following_url": "https://api.github.com/users/TomMalow/following{/other_user}",
"gists_url": "https://api.github.com/users/TomMalow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TomMalow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomMalow/subscriptions",
"organizations_url": "https://api.github.com/users/TomMalow/orgs",
"repos_url": "https://api.github.com/users/TomMalow/repos",
"events_url": "https://api.github.com/users/TomMalow/events{/privacy}",
"received_events_url": "https://api.github.com/users/TomMalow/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-07-18T11:22:43
| 2024-07-22T23:41:38
| 2024-07-22T23:41:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
We are currently working on a project where we integrate with LLMs and use Ollama with the phi3:mini model in a container as a local testing environment. The project was initially using version 0.1.48, which can run on a fairly small VM, perfect for local testing, taking only 2.8 GB of RAM. However, after upgrading to v0.2, Ollama now requires at least 5.6 GB of RAM to run the same model. That is an increase of 2.8 GB to run the same model between 0.1.48 and 0.2.6. The same issue occurs on all 0.2.x versions, but it was probably only reported from 0.2.4 onwards. It almost looks like the model is loaded twice in memory.
The container without the model loaded only draws 28 MB of RAM.
A coworker who runs the same project on Windows does not see the same increase in RAM usage between the different versions.
### OS
macOS
### GPU
Apple
### CPU
Apple silicon M3
### Ollama version
0.2.6
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5767/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5272
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5272/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5272/comments
|
https://api.github.com/repos/ollama/ollama/issues/5272/events
|
https://github.com/ollama/ollama/issues/5272
| 2,371,909,860
|
I_kwDOJ0Z1Ps6NYHjk
| 5,272
|
keep_alive and OLLAMA_KEEP_ALIVE not effective
|
{
"login": "peanutfs",
"id": 11516401,
"node_id": "MDQ6VXNlcjExNTE2NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/11516401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peanutfs",
"html_url": "https://github.com/peanutfs",
"followers_url": "https://api.github.com/users/peanutfs/followers",
"following_url": "https://api.github.com/users/peanutfs/following{/other_user}",
"gists_url": "https://api.github.com/users/peanutfs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peanutfs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peanutfs/subscriptions",
"organizations_url": "https://api.github.com/users/peanutfs/orgs",
"repos_url": "https://api.github.com/users/peanutfs/repos",
"events_url": "https://api.github.com/users/peanutfs/events{/privacy}",
"received_events_url": "https://api.github.com/users/peanutfs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 13
| 2024-06-25T07:27:45
| 2024-12-29T17:03:33
| 2024-07-03T22:34:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
* After Ollama loads the qwen2-72b model, if there is no interaction for about 5 minutes, the GPU memory is automatically released and the model's runner process exits.
* I want the model to stay loaded, so I tried setting OLLAMA_KEEP_ALIVE=-1 in ollama.service, and also setting keep_alive=-1 when calling the API. However, it does not take effect. I also tried `ollama run qwen2:72b --keepalive 24h`, but that didn't work either (see the sketch after this list for the request shape I expected to work).
* I used nvidia-smi to check and there were no running processes.
* The graphics card is NVIDIA GeForce RTX 3090 24G * 8
* CUDA Version: 12.5
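For reference, this is the documented way to control unload behaviour; a minimal sketch (the model name matches the one above, values are illustrative):
```
# per-request: load the model and keep it resident indefinitely
curl http://localhost:11434/api/generate -d '{"model": "qwen2:72b", "keep_alive": -1}'

# server-wide default: set it in the systemd unit, then restart the service
#   Environment="OLLAMA_KEEP_ALIVE=-1"
sudo systemctl daemon-reload && sudo systemctl restart ollama
```
Note that OLLAMA_KEEP_ALIVE is only read when the server starts, and a later request that passes its own keep_alive value overrides the default for that model.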
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.44
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5272/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5272/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1368
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1368/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1368/comments
|
https://api.github.com/repos/ollama/ollama/issues/1368/events
|
https://github.com/ollama/ollama/issues/1368
| 2,022,826,244
|
I_kwDOJ0Z1Ps54keEE
| 1,368
|
Different behavior between running on the host versus running on GPUs.
|
{
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"repos_url": "https://api.github.com/users/phalexo/repos",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2023-12-04T01:10:23
| 2024-02-01T23:15:06
| 2024-02-01T23:15:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When running on the GPUs (one or more), the output is either one character or one seemingly unrelated word, followed by lines of '#' characters.
It does this for a while. Sometimes it hits an exception: cuBLAS error 15 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:7586
But the same model(s) run on my host and produce proper output.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1368/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4067
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4067/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4067/comments
|
https://api.github.com/repos/ollama/ollama/issues/4067/events
|
https://github.com/ollama/ollama/pull/4067
| 2,272,665,699
|
PR_kwDOJ0Z1Ps5uNOTa
| 4,067
|
Add CUDA Driver API for GPU discovery
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-30T23:44:18
| 2024-05-06T20:30:30
| 2024-05-06T20:30:27
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4067",
"html_url": "https://github.com/ollama/ollama/pull/4067",
"diff_url": "https://github.com/ollama/ollama/pull/4067.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4067.patch",
"merged_at": "2024-05-06T20:30:27"
}
|
We're seeing some corner cases with cudart which might be resolved by switching to the driver API, which comes bundled with the driver package.
I've verified this works on Windows, Linux x86 (host and container), and Linux ARM (Jetson).
Fixes #4008
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4067/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4067/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8109
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8109/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8109/comments
|
https://api.github.com/repos/ollama/ollama/issues/8109/events
|
https://github.com/ollama/ollama/issues/8109
| 2,740,875,032
|
I_kwDOJ0Z1Ps6jXm8Y
| 8,109
|
Edit a saved model's parameters
|
{
"login": "belfie13",
"id": 39270867,
"node_id": "MDQ6VXNlcjM5MjcwODY3",
"avatar_url": "https://avatars.githubusercontent.com/u/39270867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/belfie13",
"html_url": "https://github.com/belfie13",
"followers_url": "https://api.github.com/users/belfie13/followers",
"following_url": "https://api.github.com/users/belfie13/following{/other_user}",
"gists_url": "https://api.github.com/users/belfie13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/belfie13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/belfie13/subscriptions",
"organizations_url": "https://api.github.com/users/belfie13/orgs",
"repos_url": "https://api.github.com/users/belfie13/repos",
"events_url": "https://api.github.com/users/belfie13/events{/privacy}",
"received_events_url": "https://api.github.com/users/belfie13/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-12-15T20:31:55
| 2024-12-20T13:40:27
| 2024-12-17T19:32:23
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently you would have to create a new model to change any parameters, and it's not possible to edit the template at all.
I'm not sure what use the `ollama cp` function has, as you can't edit during or after copying, so you end up with two identical models.
It would be great to `create` a model and then be able to edit the model's `parameters`, `template`, `system`, etc. without having to load it and save.
You can set the `system` and `parameters` while it's loaded, so why not have the ability to edit the models shown in `ollama list`?
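For reference, a minimal sketch of the workflow this request would simplify: today the settings effectively have to be baked into a new model with `ollama create`. The base model name and parameter values below are only illustrative.

```shell
# Write a Modelfile that layers new settings on top of an existing model
cat > Modelfile <<'EOF'
FROM llama3.1
PARAMETER temperature 0.2
PARAMETER num_ctx 8192
SYSTEM "You are a concise assistant."
EOF

# Bake the settings into a new model; changing them later means repeating this
ollama create my-tuned-model -f Modelfile
```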
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8109/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3918
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3918/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3918/comments
|
https://api.github.com/repos/ollama/ollama/issues/3918/events
|
https://github.com/ollama/ollama/pull/3918
| 2,264,227,556
|
PR_kwDOJ0Z1Ps5twqC6
| 3,918
|
llm: limit generation to 10x context size to avoid run-on generations
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-25T18:20:21
| 2024-04-25T23:02:31
| 2024-04-25T23:02:30
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3918",
"html_url": "https://github.com/ollama/ollama/pull/3918",
"diff_url": "https://github.com/ollama/ollama/pull/3918.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3918.patch",
"merged_at": "2024-04-25T23:02:30"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3918/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8084
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8084/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8084/comments
|
https://api.github.com/repos/ollama/ollama/issues/8084/events
|
https://github.com/ollama/ollama/pull/8084
| 2,737,769,971
|
PR_kwDOJ0Z1Ps6FHWRc
| 8,084
|
ollama webui for local docker deployment
|
{
"login": "oslook",
"id": 6346865,
"node_id": "MDQ6VXNlcjYzNDY4NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6346865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oslook",
"html_url": "https://github.com/oslook",
"followers_url": "https://api.github.com/users/oslook/followers",
"following_url": "https://api.github.com/users/oslook/following{/other_user}",
"gists_url": "https://api.github.com/users/oslook/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oslook/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oslook/subscriptions",
"organizations_url": "https://api.github.com/users/oslook/orgs",
"repos_url": "https://api.github.com/users/oslook/repos",
"events_url": "https://api.github.com/users/oslook/events{/privacy}",
"received_events_url": "https://api.github.com/users/oslook/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-12-13T08:22:30
| 2024-12-23T18:09:57
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8084",
"html_url": "https://github.com/ollama/ollama/pull/8084",
"diff_url": "https://github.com/ollama/ollama/pull/8084.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8084.patch",
"merged_at": null
}
|
I made a lightweight ollama-webui that allows you:
- to select models, chats, etc. via the browser,
- to import and export records and such,
- to deploy locally with Docker.
|
{
"login": "oslook",
"id": 6346865,
"node_id": "MDQ6VXNlcjYzNDY4NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6346865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oslook",
"html_url": "https://github.com/oslook",
"followers_url": "https://api.github.com/users/oslook/followers",
"following_url": "https://api.github.com/users/oslook/following{/other_user}",
"gists_url": "https://api.github.com/users/oslook/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oslook/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oslook/subscriptions",
"organizations_url": "https://api.github.com/users/oslook/orgs",
"repos_url": "https://api.github.com/users/oslook/repos",
"events_url": "https://api.github.com/users/oslook/events{/privacy}",
"received_events_url": "https://api.github.com/users/oslook/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8084/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/217
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/217/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/217/comments
|
https://api.github.com/repos/ollama/ollama/issues/217/events
|
https://github.com/ollama/ollama/issues/217
| 1,822,597,105
|
I_kwDOJ0Z1Ps5sop_x
| 217
|
Stop generation on specific keyword(s)
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-07-26T14:52:27
| 2023-07-28T00:20:58
| 2023-07-28T00:20:57
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Certain models don't automatically stop generation when it's the "user" or "human"'s turn to input data, causing the prompt to be output.
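A hedged sketch of the shape this could take, mirroring the `stop` option later exposed through the Modelfile and the generate API (the model name and stop strings below are only examples):

```shell
# Per request, via the options object of /api/generate
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Continue this dialogue:",
  "options": {"stop": ["User:", "Human:"]}
}'

# Or persisted in a Modelfile
# PARAMETER stop "User:"
# PARAMETER stop "Human:"
```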
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/217/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1737
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1737/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1737/comments
|
https://api.github.com/repos/ollama/ollama/issues/1737/events
|
https://github.com/ollama/ollama/issues/1737
| 2,060,252,148
|
I_kwDOJ0Z1Ps56zPP0
| 1,737
|
Where is ollama storing models?
|
{
"login": "sushiselite",
"id": 109239430,
"node_id": "U_kgDOBoLchg",
"avatar_url": "https://avatars.githubusercontent.com/u/109239430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushiselite",
"html_url": "https://github.com/sushiselite",
"followers_url": "https://api.github.com/users/sushiselite/followers",
"following_url": "https://api.github.com/users/sushiselite/following{/other_user}",
"gists_url": "https://api.github.com/users/sushiselite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushiselite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushiselite/subscriptions",
"organizations_url": "https://api.github.com/users/sushiselite/orgs",
"repos_url": "https://api.github.com/users/sushiselite/repos",
"events_url": "https://api.github.com/users/sushiselite/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushiselite/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 14
| 2023-12-29T16:46:42
| 2024-12-23T08:58:16
| 2024-02-20T01:30:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I was under the impression that ollama stores the models locally; however, when I run ollama on a different address with
`OLLAMA_HOST=0.0.0.0 ollama serve`, `ollama list` says I do not have any models installed and I need to pull again.
This issue occurs every time I change the IP/port.
I have also performed the steps given in the docs
```
mkdir -p /etc/systemd/system/ollama.service.d
echo '[Service]' >>/etc/systemd/system/ollama.service.d/environment.conf
echo 'Environment="OLLAMA_HOST=0.0.0.0:11434"' >>/etc/systemd/system/ollama.service.d/environment.conf
```
Running `ollama serve` by itself still listens only on localhost:11434, where I have my models; manually changing it with `OLLAMA_HOST=0.0.0.0 ollama serve` makes the models disappear.
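A hedged sketch of one way to rule out a model-directory mismatch, assuming the systemd service and the manual `ollama serve` run are reading different `OLLAMA_MODELS` locations (the path below is only an example; it matches the default for the Linux service install):

```shell
# Point the systemd service at an explicit models directory
echo 'Environment="OLLAMA_MODELS=/usr/share/ollama/.ollama/models"' \
  >>/etc/systemd/system/ollama.service.d/environment.conf
systemctl daemon-reload && systemctl restart ollama

# Use the same directory for manual runs so both servers see the same models
OLLAMA_MODELS=/usr/share/ollama/.ollama/models OLLAMA_HOST=0.0.0.0 ollama serve
```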
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1737/reactions",
"total_count": 7,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 4
}
|
https://api.github.com/repos/ollama/ollama/issues/1737/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7582
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7582/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7582/comments
|
https://api.github.com/repos/ollama/ollama/issues/7582/events
|
https://github.com/ollama/ollama/issues/7582
| 2,645,534,913
|
I_kwDOJ0Z1Ps6dr6jB
| 7,582
|
Ollama AsyncClient keeps reloading model
|
{
"login": "SnowFox4004",
"id": 101725770,
"node_id": "U_kgDOBhA2Sg",
"avatar_url": "https://avatars.githubusercontent.com/u/101725770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SnowFox4004",
"html_url": "https://github.com/SnowFox4004",
"followers_url": "https://api.github.com/users/SnowFox4004/followers",
"following_url": "https://api.github.com/users/SnowFox4004/following{/other_user}",
"gists_url": "https://api.github.com/users/SnowFox4004/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SnowFox4004/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SnowFox4004/subscriptions",
"organizations_url": "https://api.github.com/users/SnowFox4004/orgs",
"repos_url": "https://api.github.com/users/SnowFox4004/repos",
"events_url": "https://api.github.com/users/SnowFox4004/events{/privacy}",
"received_events_url": "https://api.github.com/users/SnowFox4004/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-11-09T03:19:54
| 2024-11-10T07:58:01
| 2024-11-10T07:58:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
## Problem
Ollama keeps reloading the same model while using AsyncClient.chat().

When I `await` the same function again, the model in memory is unloaded and reloaded, like this:

The `keep_alive` parameter doesn't work.
I have changed `/etc/systemd/system/ollama.service` like this:

## Expected
Ollama should load the model only once and not reload it while it is in use.
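For comparison, a minimal request sketch that sets the load duration per call through the REST API, where `keep_alive` is a documented field of `/api/chat` (the model name and duration below are illustrative):

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "qwen2.5:7b",
  "messages": [{"role": "user", "content": "hello"}],
  "keep_alive": "30m"
}'
```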
### OS
Windows, WSL2
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.6
|
{
"login": "SnowFox4004",
"id": 101725770,
"node_id": "U_kgDOBhA2Sg",
"avatar_url": "https://avatars.githubusercontent.com/u/101725770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SnowFox4004",
"html_url": "https://github.com/SnowFox4004",
"followers_url": "https://api.github.com/users/SnowFox4004/followers",
"following_url": "https://api.github.com/users/SnowFox4004/following{/other_user}",
"gists_url": "https://api.github.com/users/SnowFox4004/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SnowFox4004/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SnowFox4004/subscriptions",
"organizations_url": "https://api.github.com/users/SnowFox4004/orgs",
"repos_url": "https://api.github.com/users/SnowFox4004/repos",
"events_url": "https://api.github.com/users/SnowFox4004/events{/privacy}",
"received_events_url": "https://api.github.com/users/SnowFox4004/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7582/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3060
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3060/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3060/comments
|
https://api.github.com/repos/ollama/ollama/issues/3060/events
|
https://github.com/ollama/ollama/issues/3060
| 2,179,746,736
|
I_kwDOJ0Z1Ps6B7Euw
| 3,060
|
Ollama Server is unavailable after some time
|
{
"login": "vrubzov1957",
"id": 54937209,
"node_id": "MDQ6VXNlcjU0OTM3MjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/54937209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vrubzov1957",
"html_url": "https://github.com/vrubzov1957",
"followers_url": "https://api.github.com/users/vrubzov1957/followers",
"following_url": "https://api.github.com/users/vrubzov1957/following{/other_user}",
"gists_url": "https://api.github.com/users/vrubzov1957/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vrubzov1957/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vrubzov1957/subscriptions",
"organizations_url": "https://api.github.com/users/vrubzov1957/orgs",
"repos_url": "https://api.github.com/users/vrubzov1957/repos",
"events_url": "https://api.github.com/users/vrubzov1957/events{/privacy}",
"received_events_url": "https://api.github.com/users/vrubzov1957/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-11T17:31:03
| 2024-03-11T19:13:37
| 2024-03-11T18:13:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Sometimes, after a certain amount of time working with the same AI model (about an hour), the Ollama server becomes unavailable.
I have to start it via "ollama run MODEL"; the server starts in the background but closes again after a while.
This is very inconvenient when we use a different frontend (like Ollama Web-UI): we have to connect to the PC manually and start the Ollama server again.
LOG
[server_ollama.log](https://github.com/ollama/ollama/files/14562402/server_ollama.log)
In these logs the server became unavailable, was manually started, and then became unavailable again, at intervals of 1-2 hours.
OS: Windows
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3060/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4807
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4807/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4807/comments
|
https://api.github.com/repos/ollama/ollama/issues/4807/events
|
https://github.com/ollama/ollama/issues/4807
| 2,332,631,760
|
I_kwDOJ0Z1Ps6LCSLQ
| 4,807
|
'Deepseek-V2' model outputs mixed languages
|
{
"login": "gigascake",
"id": 36724511,
"node_id": "MDQ6VXNlcjM2NzI0NTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/36724511?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gigascake",
"html_url": "https://github.com/gigascake",
"followers_url": "https://api.github.com/users/gigascake/followers",
"following_url": "https://api.github.com/users/gigascake/following{/other_user}",
"gists_url": "https://api.github.com/users/gigascake/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gigascake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gigascake/subscriptions",
"organizations_url": "https://api.github.com/users/gigascake/orgs",
"repos_url": "https://api.github.com/users/gigascake/repos",
"events_url": "https://api.github.com/users/gigascake/events{/privacy}",
"received_events_url": "https://api.github.com/users/gigascake/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-06-04T06:06:52
| 2024-06-21T00:39:39
| 2024-06-21T00:39:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I used the '16b-lite-chat-f16' (31 GB) model.
Its inference speed is faster than the previous version's.
However, sometimes the response comes out in Chinese mixed with the language of the request, and it then keeps producing mixed-language output.
This happens when using RAG: after uploading a document and then querying, the problem occurs.

Please check and suggest a solution.
OS: Fedora 39
GPU: NVIDIA A4000 * 4
CPU: AMD Threadripper 7980X
Thanks, as always.
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.40
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4807/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7780
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7780/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7780/comments
|
https://api.github.com/repos/ollama/ollama/issues/7780/events
|
https://github.com/ollama/ollama/issues/7780
| 2,679,645,256
|
I_kwDOJ0Z1Ps6fuCRI
| 7,780
|
Add "loaded" status in Model API
|
{
"login": "explorigin",
"id": 697818,
"node_id": "MDQ6VXNlcjY5NzgxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/697818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/explorigin",
"html_url": "https://github.com/explorigin",
"followers_url": "https://api.github.com/users/explorigin/followers",
"following_url": "https://api.github.com/users/explorigin/following{/other_user}",
"gists_url": "https://api.github.com/users/explorigin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/explorigin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/explorigin/subscriptions",
"organizations_url": "https://api.github.com/users/explorigin/orgs",
"repos_url": "https://api.github.com/users/explorigin/repos",
"events_url": "https://api.github.com/users/explorigin/events{/privacy}",
"received_events_url": "https://api.github.com/users/explorigin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-11-21T14:19:15
| 2024-11-21T16:07:39
| 2024-11-21T16:07:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Add an indicator to the models list API endpoint as well as the model information endpoint indicating if the model is currently loaded in RAM.
This could allow frontends to direct queries to the loaded model for faster response if the query isn't necessarily model-specific. Other use-cases could be for monitoring services to prefer keeping one model "hot" while other models may be used periodically.
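For reference, a hedged sketch of the closest existing endpoint: `/api/ps` reports which models are currently loaded into memory (the response fields shown are abbreviated and recalled from the API docs, not verbatim):

```shell
curl http://localhost:11434/api/ps
# -> {"models":[{"name":"llama3.1:latest","size_vram":..., "expires_at":"..."}]}
```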
|
{
"login": "explorigin",
"id": 697818,
"node_id": "MDQ6VXNlcjY5NzgxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/697818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/explorigin",
"html_url": "https://github.com/explorigin",
"followers_url": "https://api.github.com/users/explorigin/followers",
"following_url": "https://api.github.com/users/explorigin/following{/other_user}",
"gists_url": "https://api.github.com/users/explorigin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/explorigin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/explorigin/subscriptions",
"organizations_url": "https://api.github.com/users/explorigin/orgs",
"repos_url": "https://api.github.com/users/explorigin/repos",
"events_url": "https://api.github.com/users/explorigin/events{/privacy}",
"received_events_url": "https://api.github.com/users/explorigin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7780/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1065
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1065/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1065/comments
|
https://api.github.com/repos/ollama/ollama/issues/1065/events
|
https://github.com/ollama/ollama/issues/1065
| 1,986,650,047
|
I_kwDOJ0Z1Ps52ad-_
| 1,065
|
Support for openai style functions
|
{
"login": "tionis",
"id": 10359102,
"node_id": "MDQ6VXNlcjEwMzU5MTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/10359102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tionis",
"html_url": "https://github.com/tionis",
"followers_url": "https://api.github.com/users/tionis/followers",
"following_url": "https://api.github.com/users/tionis/following{/other_user}",
"gists_url": "https://api.github.com/users/tionis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tionis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tionis/subscriptions",
"organizations_url": "https://api.github.com/users/tionis/orgs",
"repos_url": "https://api.github.com/users/tionis/repos",
"events_url": "https://api.github.com/users/tionis/events{/privacy}",
"received_events_url": "https://api.github.com/users/tionis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 8
| 2023-11-10T00:57:30
| 2023-12-08T23:47:49
| 2023-12-04T23:16:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I couldn't find any information if this is considered out of scope or not, but some support for function definitions would be great.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1065/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1049
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1049/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1049/comments
|
https://api.github.com/repos/ollama/ollama/issues/1049/events
|
https://github.com/ollama/ollama/issues/1049
| 1,984,523,041
|
I_kwDOJ0Z1Ps52SWsh
| 1,049
|
Random panics when generating
|
{
"login": "aaronfrancis635",
"id": 30204246,
"node_id": "MDQ6VXNlcjMwMjA0MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/30204246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronfrancis635",
"html_url": "https://github.com/aaronfrancis635",
"followers_url": "https://api.github.com/users/aaronfrancis635/followers",
"following_url": "https://api.github.com/users/aaronfrancis635/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronfrancis635/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronfrancis635/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronfrancis635/subscriptions",
"organizations_url": "https://api.github.com/users/aaronfrancis635/orgs",
"repos_url": "https://api.github.com/users/aaronfrancis635/repos",
"events_url": "https://api.github.com/users/aaronfrancis635/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronfrancis635/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2023-11-08T22:44:43
| 2024-01-12T06:11:56
| 2024-01-12T06:11:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It seems temperamental: sometimes it will generate with no issues, and other times it will panic with the same model and the same prompt.
Example body being sent:
```JSON
{
"model": "model_name",
"prompt": "Describe yourself",
"stream": false
}
```
logs below:
```
Nov 08 22:19:03 osmium ollama[2581054]: [GIN] 2023/11/08 - 22:19:03 | 200 | 3.456758865s | 5.***.***.*** | POST "/api/generate"
Nov 08 22:19:03 osmium ollama[2581054]: [GIN] 2023/11/08 - 22:19:03 | 200 | 3.456758865s | 5.***.***.*** | POST "/api/generate"
Nov 08 22:20:26 osmium ollama[3404611]: {"timestamp":1699482026,"level":"INFO","function":"log_server_request","line":1233,"message":"request","remote_addr":"127.0.0.1","remote_port":34870,"status":200,"method":"HEAD","path":"/","params":{}}
Nov 08 22:20:36 osmium ollama[2581054]: 2023/11/08 22:20:36 [Recovery] 2023/11/08 - 22:20:36 panic recovered:
Nov 08 22:20:36 osmium ollama[2581054]: runtime error: invalid memory address or nil pointer dereference
Nov 08 22:20:36 osmium ollama[2581054]: /usr/local/go/src/runtime/panic.go:261 (0x451137)
Nov 08 22:20:36 osmium ollama[2581054]: /usr/local/go/src/runtime/signal_unix.go:861 (0x451105)
Nov 08 22:20:36 osmium ollama[2581054]: /go/src/github.com/jmorganca/ollama/server/routes.go:234 (0x98f535)
Nov 08 22:20:36 osmium ollama[2581054]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x995ac4)
Nov 08 22:20:36 osmium ollama[2581054]: /go/src/github.com/jmorganca/ollama/server/routes.go:659 (0x995ab2)
Nov 08 22:20:36 osmium ollama[2581054]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x972db9)
Nov 08 22:20:36 osmium ollama[2581054]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/recovery.go:102 (0x972da7)
Nov 08 22:20:36 osmium ollama[2581054]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x971f5d)
Nov 08 22:20:36 osmium ollama[2581054]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/logger.go:240 (0x971f2c)
Nov 08 22:20:36 osmium ollama[2581054]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x97101a)
Nov 08 22:20:36 osmium ollama[2581054]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:620 (0x970cad)
Nov 08 22:20:36 osmium ollama[2581054]: /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:576 (0x9707dc)
Nov 08 22:20:36 osmium ollama[2581054]: /usr/local/go/src/net/http/server.go:2938 (0x6d326d)
Nov 08 22:20:36 osmium ollama[2581054]: /usr/local/go/src/net/http/server.go:2009 (0x6cf153)
Nov 08 22:20:36 osmium ollama[2581054]: /usr/local/go/src/runtime/asm_amd64.s:1650 (0x46d680)
Nov 08 22:20:36 osmium ollama[2581054]:
Nov 08 22:20:36 osmium ollama[2581054]: [GIN] 2023/11/08 - 22:20:36 | 500 | 10.005014413s | 5.***.***.*** | POST "/api/generate"
Nov 08 22:20:36 osmium ollama[3404611]: {"timestamp":1699482036,"level":"INFO","function":"log_server_request","line":1233,"message":"request","remote_addr":"127.0.0.1","remote_port":34870,"status":200,"method":"POST","path":"/completion","params":{}}
Nov 08 22:20:36 osmium ollama[2581054]: llama_print_timings: load time = 1268.89 ms
Nov 08 22:20:36 osmium ollama[2581054]: llama_print_timings: sample time = 150.00 ms / 461 runs ( 0.33 ms per token, 3073.25 tokens per second)
Nov 08 22:20:36 osmium ollama[2581054]: llama_print_timings: prompt eval time = 184.63 ms / 18 tokens ( 10.26 ms per token, 97.49 tokens per second)
Nov 08 22:20:36 osmium ollama[2581054]: llama_print_timings: eval time = 9604.34 ms / 460 runs ( 20.88 ms per token, 47.90 tokens per second)
```
Any ideas are appreciated.
Thanks.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1049/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7182
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7182/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7182/comments
|
https://api.github.com/repos/ollama/ollama/issues/7182/events
|
https://github.com/ollama/ollama/issues/7182
| 2,582,729,263
|
I_kwDOJ0Z1Ps6Z8VIv
| 7,182
|
Any plans to add Pyramid-Flow?
|
{
"login": "Swiffers",
"id": 1623148,
"node_id": "MDQ6VXNlcjE2MjMxNDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1623148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Swiffers",
"html_url": "https://github.com/Swiffers",
"followers_url": "https://api.github.com/users/Swiffers/followers",
"following_url": "https://api.github.com/users/Swiffers/following{/other_user}",
"gists_url": "https://api.github.com/users/Swiffers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Swiffers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Swiffers/subscriptions",
"organizations_url": "https://api.github.com/users/Swiffers/orgs",
"repos_url": "https://api.github.com/users/Swiffers/repos",
"events_url": "https://api.github.com/users/Swiffers/events{/privacy}",
"received_events_url": "https://api.github.com/users/Swiffers/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 1
| 2024-10-12T09:38:10
| 2024-10-12T09:47:54
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
hi,
Do you plan to add https://github.com/jy0205/Pyramid-Flow ?
Thanks
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7182/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2112
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2112/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2112/comments
|
https://api.github.com/repos/ollama/ollama/issues/2112/events
|
https://github.com/ollama/ollama/pull/2112
| 2,092,202,297
|
PR_kwDOJ0Z1Ps5kog-k
| 2,112
|
Add support for CUDA 5.2 cards
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-20T18:51:18
| 2024-01-27T15:14:35
| 2024-01-27T15:14:30
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2112",
"html_url": "https://github.com/ollama/ollama/pull/2112",
"diff_url": "https://github.com/ollama/ollama/pull/2112.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2112.patch",
"merged_at": null
}
|
This doesn't seem to have a noticeable negative impact to performance on 6.0+ cards, but I'll keep in draft until we can test across more variations of models and cards.
Partially fixes #1865 (5.0 still unsupported with this change)
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2112/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2112/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/583
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/583/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/583/comments
|
https://api.github.com/repos/ollama/ollama/issues/583/events
|
https://github.com/ollama/ollama/issues/583
| 1,910,049,629
|
I_kwDOJ0Z1Ps5x2Qtd
| 583
|
Windows Install
|
{
"login": "tracybannon",
"id": 79816433,
"node_id": "MDQ6VXNlcjc5ODE2NDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/79816433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tracybannon",
"html_url": "https://github.com/tracybannon",
"followers_url": "https://api.github.com/users/tracybannon/followers",
"following_url": "https://api.github.com/users/tracybannon/following{/other_user}",
"gists_url": "https://api.github.com/users/tracybannon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tracybannon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tracybannon/subscriptions",
"organizations_url": "https://api.github.com/users/tracybannon/orgs",
"repos_url": "https://api.github.com/users/tracybannon/repos",
"events_url": "https://api.github.com/users/tracybannon/events{/privacy}",
"received_events_url": "https://api.github.com/users/tracybannon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-09-24T00:53:44
| 2023-09-28T20:28:16
| 2023-09-28T20:28:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Checking in to see when the Windows install will be ready. I am chomping at the bit!
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/583/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8301
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8301/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8301/comments
|
https://api.github.com/repos/ollama/ollama/issues/8301/events
|
https://github.com/ollama/ollama/pull/8301
| 2,768,410,051
|
PR_kwDOJ0Z1Ps6GtM_x
| 8,301
|
Runner for Ollama engine
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-04T04:18:22
| 2025-01-30T01:41:53
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8301",
"html_url": "https://github.com/ollama/ollama/pull/8301",
"diff_url": "https://github.com/ollama/ollama/pull/8301.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8301.patch",
"merged_at": null
}
|
This is very much a work in progress - the new code in this PR is still quite messy.
Instructions (works best on Metal):
Start the server with the OLLAMA_NEW_RUNNERS environment variable set. At the moment, only the Ollama engine is compiled in, so you must set this variable and can only run models supported by the new engine.
`OLLAMA_NEW_RUNNERS=1 ./ollama serve`
Start a model that is supported by the Ollama engine. This one is Llama 3.1 8b Q4_K_M:
`./ollama run jessegross/llama`
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8301/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8301/timeline
| null | null | true
|