| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/7606
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7606/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7606/comments
|
https://api.github.com/repos/ollama/ollama/issues/7606/events
|
https://github.com/ollama/ollama/issues/7606
| 2,647,954,349
|
I_kwDOJ0Z1Ps6d1JOt
| 7,606
|
vram usage does not go back down after model unloads
|
{
"login": "CraftMaster163",
"id": 69362326,
"node_id": "MDQ6VXNlcjY5MzYyMzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/69362326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CraftMaster163",
"html_url": "https://github.com/CraftMaster163",
"followers_url": "https://api.github.com/users/CraftMaster163/followers",
"following_url": "https://api.github.com/users/CraftMaster163/following{/other_user}",
"gists_url": "https://api.github.com/users/CraftMaster163/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CraftMaster163/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CraftMaster163/subscriptions",
"organizations_url": "https://api.github.com/users/CraftMaster163/orgs",
"repos_url": "https://api.github.com/users/CraftMaster163/repos",
"events_url": "https://api.github.com/users/CraftMaster163/events{/privacy}",
"received_events_url": "https://api.github.com/users/CraftMaster163/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 18
| 2024-11-11T02:42:55
| 2024-11-13T22:00:31
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I set keep_alive to 0, the memory usage does not go all the way back down. It also uses system RAM while VRAM is still available.
GPU: 7800 XT
Platform: Windows
ROCm version: 6.1
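For reference, a minimal sketch (not part of the original report) of the kind of request involved, using the documented `/api/generate` endpoint and `keep_alive` option; the model name is only an example:
```python
import json
import urllib.request

# Hedged sketch: ask Ollama to unload the model right after the request by
# setting keep_alive to 0 (per the documented REST API); "llama3" is just an
# example model name.
payload = json.dumps({
    "model": "llama3",
    "prompt": "hello",
    "stream": False,
    "keep_alive": 0,
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/generate", data=payload, method="POST"
)
request.add_header("Content-Type", "application/json")

with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```
After such a request returns, the model would normally be expected to unload and release VRAM shortly, which is the behaviour this report says does not happen.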
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.4.1
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7606/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5321
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5321/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5321/comments
|
https://api.github.com/repos/ollama/ollama/issues/5321/events
|
https://github.com/ollama/ollama/issues/5321
| 2,377,767,781
|
I_kwDOJ0Z1Ps6Nudtl
| 5,321
|
Llama3: Generated outputs inconsistent despite seed and temperature
|
{
"login": "d-kleine",
"id": 53251018,
"node_id": "MDQ6VXNlcjUzMjUxMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53251018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-kleine",
"html_url": "https://github.com/d-kleine",
"followers_url": "https://api.github.com/users/d-kleine/followers",
"following_url": "https://api.github.com/users/d-kleine/following{/other_user}",
"gists_url": "https://api.github.com/users/d-kleine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d-kleine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-kleine/subscriptions",
"organizations_url": "https://api.github.com/users/d-kleine/orgs",
"repos_url": "https://api.github.com/users/d-kleine/repos",
"events_url": "https://api.github.com/users/d-kleine/events{/privacy}",
"received_events_url": "https://api.github.com/users/d-kleine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 4
| 2024-06-27T10:24:21
| 2025-01-02T16:38:58
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Follow-up of #586
Even though the output should be **deterministic** and **reproducible** with a fixed `seed`, `temperature` set to 0, and a fixed `num_ctx`, the generated output of **Llama 3** differs slightly between the first execution of this code and the second execution (without a kernel restart). All subsequent executions match the second execution:
Code snippet taken from [LLMs from scratch - Evaluation with Ollama](https://github.com/rasbt/LLMs-from-scratch/blob/1db199995121afc56146f92ec502b68df17e9c0a/ch07/03_model-evaluation/llm-instruction-eval-ollama.ipynb):
```python
import urllib.request
import json


def query_model(prompt, model="llama3", url="http://localhost:11434/api/chat"):
    # Create the data payload as a dictionary
    data = {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": prompt
            }
        ],
        "options": {
            "seed": 123,
            "temperature": 0,
            "num_ctx": 2048  # must be set, otherwise slightly random output
        }
    }

    # Convert the dictionary to a JSON formatted string and encode it to bytes
    payload = json.dumps(data).encode("utf-8")

    # Create a request object, setting the method to POST and adding necessary headers
    request = urllib.request.Request(url, data=payload, method="POST")
    request.add_header("Content-Type", "application/json")

    # Send the request and capture the response
    response_data = ""
    with urllib.request.urlopen(request) as response:
        # Read and decode the response line by line (streaming JSON)
        while True:
            line = response.readline().decode("utf-8")
            if not line:
                break
            response_json = json.loads(line)
            response_data += response_json["message"]["content"]

    return response_data


result = query_model("What do Llamas eat?")
print(result)
```
Output of execution no. $1$ (output can vary):
```
Llamas are herbivores, which means they primarily feed on plant-based foods. Their diet typically consists of:
1. Grasses: Llamas love to graze on various types of grasses, including tall grasses, short grasses, and even weeds.
2. Hay: High-quality hay, such as alfalfa or timothy hay, is a staple in a llama's diet. They enjoy munching on hay as a snack or as a main meal.
3. Grains: Llamas may be fed grains like oats, barley, or corn as an occasional treat or to supplement their diet.
4. Fruits and vegetables: Fresh fruits and veggies, such as apples, carrots, and sweet potatoes, can be given as treats or added to their meals for variety.
5. Leaves and shrubs: Llamas will also eat leaves from trees and shrubs, like willow or cedar.
In the wild, llamas might eat:
* Various grasses and plants
* Leaves from trees and shrubs
* Fruits and berries
* Bark (in some cases)
Domesticated llamas, on the other hand, typically receive a diet that includes:
* Hay as their main staple
* Grains or pellets as a supplement
* Fresh fruits and veggies as treats
It's essential to provide llamas with a balanced diet that meets their nutritional needs. Consult with a veterinarian or an experienced llama breeder to determine the best feeding plan for your llama.
```
Output for execution no. $2$ to execution no. $n$ (output should be reproducible):
```
Llamas are herbivores, which means they primarily feed on plant-based foods. Their diet typically consists of:
1. Grasses: Llamas love to graze on various types of grasses, including tall grasses, short grasses, and even weeds.
2. Hay: High-quality hay, such as alfalfa or timothy hay, is a staple in a llama's diet. They enjoy munching on hay cubes or loose hay.
3. Grains: Llamas may receive grains like oats, barley, or corn as part of their diet. However, these should be given in moderation to avoid digestive issues.
4. Fruits and vegetables: Fresh fruits and veggies can be a tasty treat for llamas. Some favorites include apples, carrots, sweet potatoes, and leafy greens like kale or spinach.
5. Minerals: Llamas need access to mineral supplements, such as salt licks or loose minerals, to ensure they're getting the necessary nutrients.
In the wild, llamas might also eat:
1. Leaves: They'll munch on leaves from trees and shrubs, like willow or cedar.
2. Bark: In some cases, llamas may eat the bark of certain trees, like aspen or birch.
3. Mosses: Llamas have been known to graze on mosses and other non-woody plant material.
It's essential to provide a balanced diet for your llama, taking into account their age, size, and individual needs. Consult with a veterinarian or experienced llama breeder to determine the best feeding plan for your llama.
```
**Observations:**
- As you can see from the outputs, the output of the first execution is effectively random, whereas the output of the second and all subsequent executions is generated consistently and deterministically.
- I have tried different platforms (Windows, Docker with an Ubuntu image), and the generated outputs differ across operating systems: the first execution is always somewhat random, but the following ones are consistent within a platform. For example, on Windows this code produced a different (but consistent) deterministic output than it did on Ubuntu.
- I have tried setting a Python hash seed; this did not solve the issue.
### OS
Linux, macOS, Windows, Docker, WSL2
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.46
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5321/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5321/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8212
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8212/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8212/comments
|
https://api.github.com/repos/ollama/ollama/issues/8212/events
|
https://github.com/ollama/ollama/issues/8212
| 2,754,827,792
|
I_kwDOJ0Z1Ps6kM1YQ
| 8,212
|
Add "/v1/images/generations" endpoints for compatiblity in order to leverage vision models `
|
{
"login": "Routhinator",
"id": 727535,
"node_id": "MDQ6VXNlcjcyNzUzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/727535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Routhinator",
"html_url": "https://github.com/Routhinator",
"followers_url": "https://api.github.com/users/Routhinator/followers",
"following_url": "https://api.github.com/users/Routhinator/following{/other_user}",
"gists_url": "https://api.github.com/users/Routhinator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Routhinator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Routhinator/subscriptions",
"organizations_url": "https://api.github.com/users/Routhinator/orgs",
"repos_url": "https://api.github.com/users/Routhinator/repos",
"events_url": "https://api.github.com/users/Routhinator/events{/privacy}",
"received_events_url": "https://api.github.com/users/Routhinator/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-12-22T20:21:38
| 2024-12-22T22:42:35
| 2024-12-22T22:42:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I was playing with getting Nextcloud Assistant to use the `llava` model from the Ollama library, and realized that since Nextcloud's Assistant integration works through the LocalAI API syntax, it expects vision models to respond at `/v1/images/generations`, which is an endpoint Ollama currently does not expose.
It would be nice to have this for integration compatibility.
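For context, a hedged sketch (not from the original request) of the request shape that OpenAI/LocalAI-style clients send to this route; field names follow the public OpenAI Images API, and the base URL assumes Ollama's usual port:
```python
import json
import urllib.error
import urllib.request

# Hedged sketch of an OpenAI-compatible image generation request; a stock
# Ollama server does not serve this route, so this would be expected to fail
# (e.g. with a 404) rather than return an image.
payload = json.dumps({
    "model": "some-image-model",        # hypothetical model name
    "prompt": "a llama in a field",
    "n": 1,
    "size": "1024x1024",
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/v1/images/generations", data=payload, method="POST"
)
request.add_header("Content-Type", "application/json")

try:
    with urllib.request.urlopen(request) as response:
        print(response.read().decode("utf-8"))
except urllib.error.HTTPError as err:
    print("endpoint not available:", err.code)
```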
|
{
"login": "Routhinator",
"id": 727535,
"node_id": "MDQ6VXNlcjcyNzUzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/727535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Routhinator",
"html_url": "https://github.com/Routhinator",
"followers_url": "https://api.github.com/users/Routhinator/followers",
"following_url": "https://api.github.com/users/Routhinator/following{/other_user}",
"gists_url": "https://api.github.com/users/Routhinator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Routhinator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Routhinator/subscriptions",
"organizations_url": "https://api.github.com/users/Routhinator/orgs",
"repos_url": "https://api.github.com/users/Routhinator/repos",
"events_url": "https://api.github.com/users/Routhinator/events{/privacy}",
"received_events_url": "https://api.github.com/users/Routhinator/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8212/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8212/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2863
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2863/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2863/comments
|
https://api.github.com/repos/ollama/ollama/issues/2863/events
|
https://github.com/ollama/ollama/issues/2863
| 2,163,492,479
|
I_kwDOJ0Z1Ps6A9EZ_
| 2,863
|
Users and user management commands
|
{
"login": "trymeouteh",
"id": 31172274,
"node_id": "MDQ6VXNlcjMxMTcyMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trymeouteh",
"html_url": "https://github.com/trymeouteh",
"followers_url": "https://api.github.com/users/trymeouteh/followers",
"following_url": "https://api.github.com/users/trymeouteh/following{/other_user}",
"gists_url": "https://api.github.com/users/trymeouteh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trymeouteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trymeouteh/subscriptions",
"organizations_url": "https://api.github.com/users/trymeouteh/orgs",
"repos_url": "https://api.github.com/users/trymeouteh/repos",
"events_url": "https://api.github.com/users/trymeouteh/events{/privacy}",
"received_events_url": "https://api.github.com/users/trymeouteh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-03-01T14:07:11
| 2024-03-12T00:25:32
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If one wants to host one or more models on a beefy computer and give access to a select few but not the entire world, I would like to suggest some sort of user feature.
**Host**
The host device will have to install Ollama, install the models, and port forward. The host device can create users, delete users, list users, etc. The users could simply be API keys, with each API key having a name attached so the host knows which user it belongs to. Alternatively, users could be a username and password. New host commands such as `user create`, `user rename`, and `user delete` would need to be added.
**Remote**
To access Ollama remotely, you would use an API key or log in with a username and password. The Ollama CLI would allow you to add and remove models from remote devices. New commands such as `remote add model`, `remote remove model`, `list remote models`, `remote login`, `remote logout`, and `remote list logins` would need to be added.
Each user should also have a permission level: an admin user with all permissions, moderators with most permissions, and regular users with the fewest permissions, who can only use the service but cannot modify it.
This feature would also enable more use cases for Ollama on mobile devices. With the models running on a remote device that has the hardware to run them, the mobile device does not need the models installed locally; since mobile hardware is less powerful than desktop hardware, a user can set up an Ollama host and use Ollama on their phone over the internet.
I would also suggest strong encryption for the connection between the remote device and the host device.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2863/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/467
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/467/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/467/comments
|
https://api.github.com/repos/ollama/ollama/issues/467/events
|
https://github.com/ollama/ollama/issues/467
| 1,879,267,746
|
I_kwDOJ0Z1Ps5wA1mi
| 467
|
Running a 70B Model with 16GB RAM: Possible Strategies?
|
{
"login": "OguzcanOzdemir",
"id": 24637523,
"node_id": "MDQ6VXNlcjI0NjM3NTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/24637523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OguzcanOzdemir",
"html_url": "https://github.com/OguzcanOzdemir",
"followers_url": "https://api.github.com/users/OguzcanOzdemir/followers",
"following_url": "https://api.github.com/users/OguzcanOzdemir/following{/other_user}",
"gists_url": "https://api.github.com/users/OguzcanOzdemir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OguzcanOzdemir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OguzcanOzdemir/subscriptions",
"organizations_url": "https://api.github.com/users/OguzcanOzdemir/orgs",
"repos_url": "https://api.github.com/users/OguzcanOzdemir/repos",
"events_url": "https://api.github.com/users/OguzcanOzdemir/events{/privacy}",
"received_events_url": "https://api.github.com/users/OguzcanOzdemir/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 4
| 2023-09-03T23:19:22
| 2023-09-05T19:47:44
| 2023-09-05T16:05:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
I'm currently working with a system that has 16GB of RAM, and I'm interested in running a 70B model for my project. I understand that according to the GitHub repository's documentation, a 70B model typically requires 32GB of RAM.
However, due to my system limitations, I'm looking for guidance on potential strategies or alternative methods to run a 70B model efficiently with 16GB of RAM.
Are there any techniques, optimizations, or workarounds that I can explore to make this possible? I would greatly appreciate any advice or suggestions on how to approach this challenge and still achieve acceptable performance.
Thank you for your assistance and insights.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/467/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3929
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3929/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3929/comments
|
https://api.github.com/repos/ollama/ollama/issues/3929/events
|
https://github.com/ollama/ollama/issues/3929
| 2,264,918,755
|
I_kwDOJ0Z1Ps6G_-rj
| 3,929
|
Can you please add llava-phi-3-mini by xtuner?
|
{
"login": "yashasnadigsyn",
"id": 103478177,
"node_id": "U_kgDOBirzoQ",
"avatar_url": "https://avatars.githubusercontent.com/u/103478177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yashasnadigsyn",
"html_url": "https://github.com/yashasnadigsyn",
"followers_url": "https://api.github.com/users/yashasnadigsyn/followers",
"following_url": "https://api.github.com/users/yashasnadigsyn/following{/other_user}",
"gists_url": "https://api.github.com/users/yashasnadigsyn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yashasnadigsyn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yashasnadigsyn/subscriptions",
"organizations_url": "https://api.github.com/users/yashasnadigsyn/orgs",
"repos_url": "https://api.github.com/users/yashasnadigsyn/repos",
"events_url": "https://api.github.com/users/yashasnadigsyn/events{/privacy}",
"received_events_url": "https://api.github.com/users/yashasnadigsyn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-04-26T04:08:36
| 2024-04-27T02:20:14
| 2024-04-27T02:20:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Here is the model gguf link: https://huggingface.co/xtuner/llava-phi-3-mini-gguf
Here is the model hf link: https://huggingface.co/xtuner/llava-phi-3-mini-hf
I have been trying to add it manually with a Modelfile, but I can't seem to understand the template. I tried the llava template, the bakllava template, and other multimodal templates, but the model gets confused.
Can anyone help me?
|
{
"login": "yashasnadigsyn",
"id": 103478177,
"node_id": "U_kgDOBirzoQ",
"avatar_url": "https://avatars.githubusercontent.com/u/103478177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yashasnadigsyn",
"html_url": "https://github.com/yashasnadigsyn",
"followers_url": "https://api.github.com/users/yashasnadigsyn/followers",
"following_url": "https://api.github.com/users/yashasnadigsyn/following{/other_user}",
"gists_url": "https://api.github.com/users/yashasnadigsyn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yashasnadigsyn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yashasnadigsyn/subscriptions",
"organizations_url": "https://api.github.com/users/yashasnadigsyn/orgs",
"repos_url": "https://api.github.com/users/yashasnadigsyn/repos",
"events_url": "https://api.github.com/users/yashasnadigsyn/events{/privacy}",
"received_events_url": "https://api.github.com/users/yashasnadigsyn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3929/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/3929/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3719
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3719/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3719/comments
|
https://api.github.com/repos/ollama/ollama/issues/3719/events
|
https://github.com/ollama/ollama/issues/3719
| 2,249,590,632
|
I_kwDOJ0Z1Ps6GFgdo
| 3,719
|
How do I download an AI model to external storage and run it?
|
{
"login": "manfar",
"id": 13696009,
"node_id": "MDQ6VXNlcjEzNjk2MDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13696009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manfar",
"html_url": "https://github.com/manfar",
"followers_url": "https://api.github.com/users/manfar/followers",
"following_url": "https://api.github.com/users/manfar/following{/other_user}",
"gists_url": "https://api.github.com/users/manfar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manfar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manfar/subscriptions",
"organizations_url": "https://api.github.com/users/manfar/orgs",
"repos_url": "https://api.github.com/users/manfar/repos",
"events_url": "https://api.github.com/users/manfar/events{/privacy}",
"received_events_url": "https://api.github.com/users/manfar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-04-18T02:12:52
| 2025-01-07T13:11:12
| 2024-05-05T00:20:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
For existing Mac computers with insufficient hard disk space, how can a model be downloaded to an external SSD drive and run from there instead of being stored on the computer itself? This way you can install more models and run them faster. It would also help to support searching for the storage path and viewing the download location in Finder.
Also, how do I find the path to the models I have already downloaded on my Mac? I can't find where the downloaded models are.
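As a hedged aside (not part of the original question): Ollama documents an `OLLAMA_MODELS` environment variable for relocating the model store, and the default location on macOS is `~/.ollama/models`. A small sketch for checking both; the external path is a made-up example:
```python
import os
from pathlib import Path

# Hedged sketch: print the default macOS model location and a hypothetical
# external-SSD location that OLLAMA_MODELS could point to before the server
# is started.
default_dir = Path.home() / ".ollama" / "models"
external_dir = Path("/Volumes/ExternalSSD/ollama-models")  # hypothetical mount

print("default model dir:", default_dir, "exists:", default_dir.exists())
print("OLLAMA_MODELS currently:", os.environ.get("OLLAMA_MODELS", "<unset>"))
print("to relocate, set OLLAMA_MODELS to", external_dir, "and restart the server")
```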
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3719/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4674
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4674/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4674/comments
|
https://api.github.com/repos/ollama/ollama/issues/4674/events
|
https://github.com/ollama/ollama/issues/4674
| 2,320,283,650
|
I_kwDOJ0Z1Ps6KTLgC
| 4,674
|
any command but serve gets errors when using a proxy
|
{
"login": "lingfengchencn",
"id": 2757011,
"node_id": "MDQ6VXNlcjI3NTcwMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2757011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lingfengchencn",
"html_url": "https://github.com/lingfengchencn",
"followers_url": "https://api.github.com/users/lingfengchencn/followers",
"following_url": "https://api.github.com/users/lingfengchencn/following{/other_user}",
"gists_url": "https://api.github.com/users/lingfengchencn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lingfengchencn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lingfengchencn/subscriptions",
"organizations_url": "https://api.github.com/users/lingfengchencn/orgs",
"repos_url": "https://api.github.com/users/lingfengchencn/repos",
"events_url": "https://api.github.com/users/lingfengchencn/events{/privacy}",
"received_events_url": "https://api.github.com/users/lingfengchencn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-28T06:59:22
| 2024-06-18T16:51:08
| 2024-06-18T16:51:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I configure http_proxy / https_proxy / HTTP_PROXY / HTTPS_PROXY in docker-compose, `ollama serve` runs well, but other commands get errors.
Here is my YAML:
```yaml
version: '3.8'
name: "dev-ollama"
services:
  ollama:
    image: ollama/ollama
    environment:
      - OLLAMA_DEBUG=1
      - http_proxy=http://host.docker.internal:7890
      - https_proxy=http://host.docker.internal:7890
      - HTTP_PROXY=http://host.docker.internal:7890
      - HTTPS_PROXY=http://host.docker.internal:7890
      - NO_PROXY=localhost,127.0.0.1,.aliyun.com
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./ollama:/root/.ollama
    ports:
      - "11434:11434"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [gpu]
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    restart: unless-stopped
```
After starting successfully, I ran these commands:
```
root@4699c7ce936d:/# ollama run
Error: requires at least 1 arg(s), only received 0
root@4699c7ce936d:/# ollama run llama3
Error: something went wrong, please see the Ollama server logs for details
root@4699c7ce936d:/# ollama ps
Error: something went wrong, please see the Ollama server logs for details
```
and there are no logs... except the startup logs:
```
(base) [root@ ... docker]# docker compose up ollama
[+] Building 0.0s (0/0)
[+] Running 2/2
✔ Volume "dev-ollama_ollama" Created 0.0s
✔ Container dev-ollama-ollama-1 Recreated 0.1s
Attaching to dev-ollama-ollama-1
dev-ollama-ollama-1 | Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
dev-ollama-ollama-1 | Your new public key is:
dev-ollama-ollama-1 |
dev-ollama-ollama-1 | ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHmaya1CaoLoxW9yezdS1bkOx5lxQr9/8qyvxk0RzmSd
dev-ollama-ollama-1 |
dev-ollama-ollama-1 | 2024/05/28 06:40:53 routes.go:1008: INFO server config env="map[OLLAMA_DEBUG:true OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
dev-ollama-ollama-1 | time=2024-05-28T06:40:53.928Z level=INFO source=images.go:704 msg="total blobs: 0"
dev-ollama-ollama-1 | time=2024-05-28T06:40:53.928Z level=INFO source=images.go:711 msg="total unused blobs removed: 0"
dev-ollama-ollama-1 | time=2024-05-28T06:40:53.929Z level=INFO source=routes.go:1054 msg="Listening on [::]:11434 (version 0.1.38)"
dev-ollama-ollama-1 | time=2024-05-28T06:40:53.929Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama980004349/runners
dev-ollama-ollama-1 | time=2024-05-28T06:40:53.929Z level=DEBUG source=payload.go:180 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/ollama_llama_server.gz
dev-ollama-ollama-1 | time=2024-05-28T06:40:53.929Z level=DEBUG source=payload.go:180 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/ollama_llama_server.gz
dev-ollama-ollama-1 | time=2024-05-28T06:40:53.929Z level=DEBUG source=payload.go:180 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/ollama_llama_server.gz
dev-ollama-ollama-1 | time=2024-05-28T06:40:53.929Z level=DEBUG source=payload.go:180 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublas.so.11.gz
dev-ollama-ollama-1 | time=2024-05-28T06:40:53.929Z level=DEBUG source=payload.go:180 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublasLt.so.11.gz
dev-ollama-ollama-1 | time=2024-05-28T06:40:53.929Z level=DEBUG source=payload.go:180 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcudart.so.11.0.gz
dev-ollama-ollama-1 | time=2024-05-28T06:40:53.929Z level=DEBUG source=payload.go:180 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/ollama_llama_server.gz
dev-ollama-ollama-1 | time=2024-05-28T06:40:53.929Z level=DEBUG source=payload.go:180 msg=extracting variant=rocm_v60002 file=build/linux/x86_64/rocm_v60002/bin/deps.txt.gz
dev-ollama-ollama-1 | time=2024-05-28T06:40:53.929Z level=DEBUG source=payload.go:180 msg=extracting variant=rocm_v60002 file=build/linux/x86_64/rocm_v60002/bin/ollama_llama_server.gz
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.730Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama980004349/runners/cpu
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.730Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama980004349/runners/cpu_avx
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.730Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama980004349/runners/cpu_avx2
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.730Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama980004349/runners/cuda_v11
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.730Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama980004349/runners/rocm_v60002
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.730Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.730Z level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.730Z level=DEBUG source=sched.go:90 msg="starting llm scheduler"
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.730Z level=DEBUG source=gpu.go:122 msg="Detecting GPUs"
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.730Z level=DEBUG source=gpu.go:261 msg="Searching for GPU library" name=libcuda.so*
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.730Z level=DEBUG source=gpu.go:280 msg="gpu library search" globs="[/usr/local/nvidia/lib/libcuda.so** /usr/local/nvidia/lib64/libcuda.so** /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.731Z level=DEBUG source=gpu.go:313 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.525.116.04]
dev-ollama-ollama-1 | CUDA driver version: 12.0
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.739Z level=DEBUG source=gpu.go:127 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.525.116.04
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.739Z level=DEBUG source=cpu_common.go:11 msg="CPU has AVX2"
dev-ollama-ollama-1 | [GPU-ac0f5b80-9816-9909-12f0-23878ea93215] CUDA totalMem 32500 mb
dev-ollama-ollama-1 | [GPU-ac0f5b80-9816-9909-12f0-23878ea93215] CUDA freeMem 10968 mb
dev-ollama-ollama-1 | [GPU-ac0f5b80-9816-9909-12f0-23878ea93215] Compute Capability 7.0
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.854Z level=DEBUG source=amd_linux.go:322 msg="amdgpu driver not detected /sys/module/amdgpu"
dev-ollama-ollama-1 | releasing nvcuda library
dev-ollama-ollama-1 | time=2024-05-28T06:40:57.854Z level=INFO source=types.go:71 msg="inference compute" id=GPU-ac0f5b80-9816-9909-12f0-23878ea93215 library=cuda compute=7.0 driver=12.0 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="10.7 GiB"
^CGracefully stopping... (press Ctrl+C again to force)
Aborting on container exit...
[+] Stopping 1/1
✔ Container dev-ollama-ollama-1 Stopped
```
BUT if I remove http_proxy, it works fine.
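As a hedged diagnostic aside (not from the original report), one quick way to see which proxy settings a client process inside the container would pick up for the local API endpoint:
```python
import urllib.request

# Hedged sketch: show the proxy environment Python sees and whether this host
# would be bypassed; useful only as a rough sanity check of the *_proxy /
# NO_PROXY values passed into the container.
print("proxies:", urllib.request.getproxies())
print("bypass localhost:11434:", urllib.request.proxy_bypass("localhost:11434"))
```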
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.38
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4674/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6548
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6548/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6548/comments
|
https://api.github.com/repos/ollama/ollama/issues/6548/events
|
https://github.com/ollama/ollama/pull/6548
| 2,493,192,684
|
PR_kwDOJ0Z1Ps55xFfu
| 6,548
|
update the openai docs to explain how to set the context size
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-29T00:09:37
| 2024-08-29T00:11:48
| 2024-08-29T00:11:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6548",
"html_url": "https://github.com/ollama/ollama/pull/6548",
"diff_url": "https://github.com/ollama/ollama/pull/6548.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6548.patch",
"merged_at": "2024-08-29T00:11:46"
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6548/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3033
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3033/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3033/comments
|
https://api.github.com/repos/ollama/ollama/issues/3033/events
|
https://github.com/ollama/ollama/pull/3033
| 2,177,513,695
|
PR_kwDOJ0Z1Ps5pKMEJ
| 3,033
|
docs: Add AI telegram to Community Integrations.
|
{
"login": "tusharhero",
"id": 54012021,
"node_id": "MDQ6VXNlcjU0MDEyMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/54012021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tusharhero",
"html_url": "https://github.com/tusharhero",
"followers_url": "https://api.github.com/users/tusharhero/followers",
"following_url": "https://api.github.com/users/tusharhero/following{/other_user}",
"gists_url": "https://api.github.com/users/tusharhero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tusharhero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tusharhero/subscriptions",
"organizations_url": "https://api.github.com/users/tusharhero/orgs",
"repos_url": "https://api.github.com/users/tusharhero/repos",
"events_url": "https://api.github.com/users/tusharhero/events{/privacy}",
"received_events_url": "https://api.github.com/users/tusharhero/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-10T04:08:51
| 2024-03-25T18:56:42
| 2024-03-25T18:56:42
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3033",
"html_url": "https://github.com/ollama/ollama/pull/3033",
"diff_url": "https://github.com/ollama/ollama/pull/3033.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3033.patch",
"merged_at": "2024-03-25T18:56:42"
}
| null |
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3033/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3066
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3066/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3066/comments
|
https://api.github.com/repos/ollama/ollama/issues/3066/events
|
https://github.com/ollama/ollama/issues/3066
| 2,180,244,723
|
I_kwDOJ0Z1Ps6B8-Tz
| 3,066
|
CLBlast for integrated GPU support
|
{
"login": "joshuachris2001",
"id": 54247518,
"node_id": "MDQ6VXNlcjU0MjQ3NTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/54247518?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshuachris2001",
"html_url": "https://github.com/joshuachris2001",
"followers_url": "https://api.github.com/users/joshuachris2001/followers",
"following_url": "https://api.github.com/users/joshuachris2001/following{/other_user}",
"gists_url": "https://api.github.com/users/joshuachris2001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshuachris2001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshuachris2001/subscriptions",
"organizations_url": "https://api.github.com/users/joshuachris2001/orgs",
"repos_url": "https://api.github.com/users/joshuachris2001/repos",
"events_url": "https://api.github.com/users/joshuachris2001/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshuachris2001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 1
| 2024-03-11T21:11:56
| 2024-03-11T22:26:48
| 2024-03-11T22:26:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is there support for compiling Ollama with CLBlast for a device with an integrated non-AMD GPU?
I've tried compiling with: `CLBlast_DIR=/usr/lib/cmake/CLBlast go generate -tags clbast ./...`
yet I still get "no GPU detected".
The iGPU I'm trying to get CLBlast to work on is an `Intel HD Graphics 5500`; when llama.cpp is explicitly compiled for it, the slight boost in speed is still helpful, especially with CLIP.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3066/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8559
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8559/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8559/comments
|
https://api.github.com/repos/ollama/ollama/issues/8559/events
|
https://github.com/ollama/ollama/issues/8559
| 2,808,921,360
|
I_kwDOJ0Z1Ps6nbL0Q
| 8,559
|
Model list cleared after starting as a service using nssm
|
{
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.github.com/users/AncientMystic/followers",
"following_url": "https://api.github.com/users/AncientMystic/following{/other_user}",
"gists_url": "https://api.github.com/users/AncientMystic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AncientMystic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AncientMystic/subscriptions",
"organizations_url": "https://api.github.com/users/AncientMystic/orgs",
"repos_url": "https://api.github.com/users/AncientMystic/repos",
"events_url": "https://api.github.com/users/AncientMystic/events{/privacy}",
"received_events_url": "https://api.github.com/users/AncientMystic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2025-01-24T09:08:23
| 2025-01-24T09:21:40
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I started Ollama as a service on Windows, hoping that the way Windows handles services would allow for a slight improvement in performance, but after starting Ollama this way the model list is gone.
Both starting as a service and through the app now have the same result: no models.
All the model files are still present in the model folder, and the folder is set manually with the OLLAMA_MODELS environment variable.
Is there any way to restore or rebuild the list? I have around 450 GB of models installed, so I would really rather not have to start all over. Many of them are also from Hugging Face, so it would be a long process of finding and re-downloading many of them.
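As a hedged aside (not part of the original post): the model list is derived from manifest files under the models directory, so one rough check is to confirm, from the same environment the service runs in, which directory it sees and whether the manifests are still there; the paths below assume the standard `<models>/manifests` layout:
```python
import os
from pathlib import Path

# Hedged sketch: print the models directory this process would use and list
# any manifest files (which back `ollama list`), assuming the standard layout.
models_dir = Path(os.environ.get("OLLAMA_MODELS", Path.home() / ".ollama" / "models"))
manifests = models_dir / "manifests"

print("models dir:", models_dir)
if manifests.is_dir():
    for path in sorted(manifests.rglob("*")):
        if path.is_file():
            print(path.relative_to(manifests))
else:
    print("no manifests directory found at", manifests)
```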
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8559/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7368
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7368/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7368/comments
|
https://api.github.com/repos/ollama/ollama/issues/7368/events
|
https://github.com/ollama/ollama/pull/7368
| 2,615,297,617
|
PR_kwDOJ0Z1Ps5_9ssN
| 7,368
|
runner.go: Use stable llama.cpp sampling interface
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-10-25T22:45:47
| 2024-11-21T19:35:29
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7368",
"html_url": "https://github.com/ollama/ollama/pull/7368",
"diff_url": "https://github.com/ollama/ollama/pull/7368.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7368.patch",
"merged_at": null
}
|
Currently for sampling we are using an internal interface for the llama.cpp examples, which tends to change from release to release. This is the only such interface used for text models, though llava and clip are also used for image processing.
This switches to use the stable interfaces, reducing the amount of work needed for future llama.cpp bumps. It also significantly reduces the amount of code that we need to vendor (much of it is unused but is a dependency).
The sampling logic is the same as it is now for the parameters that we support and is done at the CGo layer. However, in the future, if there are benefits to reconfiguring it, we can expose the primitives to native Go code.
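As a rough illustration (not the actual runner code), a hypothetical Go struct grouping the supported sampling options is sketched below; the samplingParams type and describe helper are assumptions made for this sketch, and in the PR these values are applied at the CGo layer through llama.cpp's stable sampler chain rather than in Go.

```go
package main

import "fmt"

// samplingParams is a hypothetical grouping of the sampling options referred
// to above; it exists only for this sketch. The real configuration happens at
// the CGo layer via llama.cpp's stable sampling interface.
type samplingParams struct {
	Temperature   float32
	TopK          int
	TopP          float32
	MinP          float32
	RepeatPenalty float32
	Seed          uint32
}

// describe only prints the configuration; a future native-Go sampler would
// instead build a sampler chain from these values.
func describe(p samplingParams) string {
	return fmt.Sprintf("temp=%.2f top_k=%d top_p=%.2f min_p=%.2f repeat_penalty=%.2f seed=%d",
		p.Temperature, p.TopK, p.TopP, p.MinP, p.RepeatPenalty, p.Seed)
}

func main() {
	p := samplingParams{Temperature: 0.8, TopK: 40, TopP: 0.9, MinP: 0.05, RepeatPenalty: 1.1, Seed: 42}
	fmt.Println(describe(p))
}
```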
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7368/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/135
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/135/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/135/comments
|
https://api.github.com/repos/ollama/ollama/issues/135/events
|
https://github.com/ollama/ollama/pull/135
| 1,813,277,989
|
PR_kwDOJ0Z1Ps5V9q9k
| 135
|
ctrl+c on empty line exits
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-20T06:40:59
| 2023-07-20T16:20:37
| 2023-07-20T07:53:08
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/135",
"html_url": "https://github.com/ollama/ollama/pull/135",
"diff_url": "https://github.com/ollama/ollama/pull/135.diff",
"patch_url": "https://github.com/ollama/ollama/pull/135.patch",
"merged_at": "2023-07-20T07:53:08"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/135/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1364
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1364/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1364/comments
|
https://api.github.com/repos/ollama/ollama/issues/1364/events
|
https://github.com/ollama/ollama/pull/1364
| 2,022,630,377
|
PR_kwDOJ0Z1Ps5g_T7J
| 1,364
|
Ollama Telegram Bot
|
{
"login": "ruecat",
"id": 79139779,
"node_id": "MDQ6VXNlcjc5MTM5Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/79139779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruecat",
"html_url": "https://github.com/ruecat",
"followers_url": "https://api.github.com/users/ruecat/followers",
"following_url": "https://api.github.com/users/ruecat/following{/other_user}",
"gists_url": "https://api.github.com/users/ruecat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruecat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruecat/subscriptions",
"organizations_url": "https://api.github.com/users/ruecat/orgs",
"repos_url": "https://api.github.com/users/ruecat/repos",
"events_url": "https://api.github.com/users/ruecat/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruecat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-12-03T17:16:56
| 2023-12-03T19:19:55
| 2023-12-03T19:19:55
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1364",
"html_url": "https://github.com/ollama/ollama/pull/1364",
"diff_url": "https://github.com/ollama/ollama/pull/1364.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1364.patch",
"merged_at": "2023-12-03T19:19:55"
}
|
This pull request adds [telegram-ollama](https://github.com/ruecat/ollama-telegram) to the [Extensions & Plugins](https://github.com/jmorganca/ollama/commit/41f73433bbf607160f2356388463de42714f2d23) section.
I created a bot for Telegram; it uses aiogram and can stream API responses into a single message without hitting rate limits.
Soon it will get Docker support and other features.
Thanks!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1364/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6039
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6039/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6039/comments
|
https://api.github.com/repos/ollama/ollama/issues/6039/events
|
https://github.com/ollama/ollama/pull/6039
| 2,434,633,290
|
PR_kwDOJ0Z1Ps52tC4o
| 6,039
|
update llama.cpp submodule to `6eeaeba1`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-07-29T07:11:08
| 2024-07-30T01:09:01
| 2024-07-29T20:20:26
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6039",
"html_url": "https://github.com/ollama/ollama/pull/6039",
"diff_url": "https://github.com/ollama/ollama/pull/6039.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6039.patch",
"merged_at": "2024-07-29T20:20:26"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6039/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3019
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3019/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3019/comments
|
https://api.github.com/repos/ollama/ollama/issues/3019/events
|
https://github.com/ollama/ollama/issues/3019
| 2,177,110,079
|
I_kwDOJ0Z1Ps6BxBA_
| 3,019
|
Automatic sub-language constraint sections
|
{
"login": "mirek",
"id": 8561,
"node_id": "MDQ6VXNlcjg1NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8561?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mirek",
"html_url": "https://github.com/mirek",
"followers_url": "https://api.github.com/users/mirek/followers",
"following_url": "https://api.github.com/users/mirek/following{/other_user}",
"gists_url": "https://api.github.com/users/mirek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mirek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mirek/subscriptions",
"organizations_url": "https://api.github.com/users/mirek/orgs",
"repos_url": "https://api.github.com/users/mirek/repos",
"events_url": "https://api.github.com/users/mirek/events{/privacy}",
"received_events_url": "https://api.github.com/users/mirek/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2024-03-09T07:34:20
| 2024-03-12T07:23:29
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be useful if ollama supported automatic, text-based, plugin-driven grammar sections.
Every time a triple-backtick section is used in the text, generation enters that language's constraint mode, for example:
1. "```json" enters a JSON BNF
2. "```json:Foo" enters a JSON BNF plus the JSON schema for the Foo object
3. "```python" enters a Python BNF
4. "```quote:documentRef" enters trie-based constraints for quotes
5. "```whatever:whatever?" enters whatever
6. a final "\n```" pops the mode
Available grammars could be:
1. server/model defined – avoiding network overhead
2. client defined – flexible under client-side control (probably much better)
The API for a plugin/handler could be:
1. character based (see the sketch after this list) – given the text from the beginning of the opening "```foo", it returns the allowed next characters, or
2. native model token based – given the same input, it returns the list of allowed tokens, or
3. generic suffix text – given the same input, it returns a list of allowed, arbitrarily sized completions (so the plugin can adapt, i.e. for wide fanout it can return a list of short strings, e.g. single characters; for less fanout it can produce longer output, with a possible single-output optimization where the LLM doesn't need to be consulted <<if there is only a single completion allowed, there is no need to run it through the model at all>>), or
4. optimized trie output – given the same input, it returns the output of 3. in a compressed trie format.
A plugin can internally use whatever strategy it wants to conform to the autocompletion interface (BNF, a trie-based index for document quotes, or something else).
The heading can be URI-based, e.g. "```trie://mydb/docs?id=123", so plugins can register themselves as handlers for specific modes.
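A minimal Go sketch of what the character-based handler (option 1 above) could look like; the ConstraintHandler interface, the Allowed method, and the toy JSON handler are assumptions made for illustration, not an existing ollama or llama.cpp API.

```go
package main

import (
	"fmt"
	"strings"
)

// ConstraintHandler is a hypothetical plugin interface for option 1 above:
// given the text generated since the opening fence (e.g. a json section),
// it returns the characters the model is allowed to emit next.
type ConstraintHandler interface {
	Allowed(prefix string) []rune
}

// jsonFenceHandler is a toy handler for a json section: it only allows an
// opening brace at the start, then a small set of characters that can
// continue a JSON object. A real handler would track full JSON grammar state.
type jsonFenceHandler struct{}

func (jsonFenceHandler) Allowed(prefix string) []rune {
	if strings.TrimSpace(prefix) == "" {
		return []rune{'{'}
	}
	return []rune("\"0123456789:,{}[] \nabcdefghijklmnopqrstuvwxyz")
}

func main() {
	var h ConstraintHandler = jsonFenceHandler{}
	fmt.Printf("allowed at start: %q\n", string(h.Allowed("")))
	fmt.Printf("allowed after '{': %d characters\n", len(h.Allowed("{")))
}
```

Under an interface of this shape, the generator would mask its next choices to those whose text stays within the allowed set, which is the same fanout trade-off described in options 3 and 4.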
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3019/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3019/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/985
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/985/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/985/comments
|
https://api.github.com/repos/ollama/ollama/issues/985/events
|
https://github.com/ollama/ollama/pull/985
| 1,975,506,789
|
PR_kwDOJ0Z1Ps5egBFc
| 985
|
restore runner build flags
|
{
"login": "yoshino-s",
"id": 28624661,
"node_id": "MDQ6VXNlcjI4NjI0NjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/28624661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoshino-s",
"html_url": "https://github.com/yoshino-s",
"followers_url": "https://api.github.com/users/yoshino-s/followers",
"following_url": "https://api.github.com/users/yoshino-s/following{/other_user}",
"gists_url": "https://api.github.com/users/yoshino-s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoshino-s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoshino-s/subscriptions",
"organizations_url": "https://api.github.com/users/yoshino-s/orgs",
"repos_url": "https://api.github.com/users/yoshino-s/repos",
"events_url": "https://api.github.com/users/yoshino-s/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoshino-s/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-11-03T06:01:13
| 2023-11-24T08:00:30
| 2023-11-14T16:52:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/985",
"html_url": "https://github.com/ollama/ollama/pull/985",
"diff_url": "https://github.com/ollama/ollama/pull/985.diff",
"patch_url": "https://github.com/ollama/ollama/pull/985.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/985/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5595
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5595/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5595/comments
|
https://api.github.com/repos/ollama/ollama/issues/5595/events
|
https://github.com/ollama/ollama/issues/5595
| 2,400,345,707
|
I_kwDOJ0Z1Ps6PEl5r
| 5,595
|
codegeex4
|
{
"login": "sinxyz",
"id": 32287704,
"node_id": "MDQ6VXNlcjMyMjg3NzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/32287704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sinxyz",
"html_url": "https://github.com/sinxyz",
"followers_url": "https://api.github.com/users/sinxyz/followers",
"following_url": "https://api.github.com/users/sinxyz/following{/other_user}",
"gists_url": "https://api.github.com/users/sinxyz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sinxyz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sinxyz/subscriptions",
"organizations_url": "https://api.github.com/users/sinxyz/orgs",
"repos_url": "https://api.github.com/users/sinxyz/repos",
"events_url": "https://api.github.com/users/sinxyz/events{/privacy}",
"received_events_url": "https://api.github.com/users/sinxyz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-07-10T10:21:38
| 2024-11-17T22:24:22
| 2024-11-17T22:24:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Can't use; the output is: GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5595/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2598
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2598/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2598/comments
|
https://api.github.com/repos/ollama/ollama/issues/2598/events
|
https://github.com/ollama/ollama/issues/2598
| 2,143,104,049
|
I_kwDOJ0Z1Ps5_vSwx
| 2,598
|
Add ROCm support on windows
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 16
| 2024-02-19T20:32:43
| 2024-03-27T05:51:10
| 2024-03-07T18:51:01
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Users with Radeon cards would like to be able to take advantage of the new native windows app and not have to resort to WSL2 to get support for their AMD GPUs.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2598/reactions",
"total_count": 12,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
}
|
https://api.github.com/repos/ollama/ollama/issues/2598/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7481
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7481/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7481/comments
|
https://api.github.com/repos/ollama/ollama/issues/7481/events
|
https://github.com/ollama/ollama/issues/7481
| 2,631,299,461
|
I_kwDOJ0Z1Ps6c1nGF
| 7,481
|
[FEATURE REQUEST] - Add option to add code into the "Send a Message" prompt <>
|
{
"login": "BryanBond",
"id": 187150339,
"node_id": "U_kgDOCyewAw",
"avatar_url": "https://avatars.githubusercontent.com/u/187150339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BryanBond",
"html_url": "https://github.com/BryanBond",
"followers_url": "https://api.github.com/users/BryanBond/followers",
"following_url": "https://api.github.com/users/BryanBond/following{/other_user}",
"gists_url": "https://api.github.com/users/BryanBond/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BryanBond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BryanBond/subscriptions",
"organizations_url": "https://api.github.com/users/BryanBond/orgs",
"repos_url": "https://api.github.com/users/BryanBond/repos",
"events_url": "https://api.github.com/users/BryanBond/events{/privacy}",
"received_events_url": "https://api.github.com/users/BryanBond/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-11-03T16:00:28
| 2024-11-05T03:52:51
| 2024-11-05T03:49:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I would love to see the addition of an 'add code <>' option to the "Send a Message" box in Ollama. This would make formatting code inquiries to the LLM much cleaner and easier to interpret/read.
|
{
"login": "BryanBond",
"id": 187150339,
"node_id": "U_kgDOCyewAw",
"avatar_url": "https://avatars.githubusercontent.com/u/187150339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BryanBond",
"html_url": "https://github.com/BryanBond",
"followers_url": "https://api.github.com/users/BryanBond/followers",
"following_url": "https://api.github.com/users/BryanBond/following{/other_user}",
"gists_url": "https://api.github.com/users/BryanBond/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BryanBond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BryanBond/subscriptions",
"organizations_url": "https://api.github.com/users/BryanBond/orgs",
"repos_url": "https://api.github.com/users/BryanBond/repos",
"events_url": "https://api.github.com/users/BryanBond/events{/privacy}",
"received_events_url": "https://api.github.com/users/BryanBond/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7481/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/1933
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1933/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1933/comments
|
https://api.github.com/repos/ollama/ollama/issues/1933/events
|
https://github.com/ollama/ollama/issues/1933
| 2,077,696,781
|
I_kwDOJ0Z1Ps571yMN
| 1,933
|
Wrong tag on dockerhub
|
{
"login": "otavio-silva",
"id": 22914610,
"node_id": "MDQ6VXNlcjIyOTE0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/22914610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/otavio-silva",
"html_url": "https://github.com/otavio-silva",
"followers_url": "https://api.github.com/users/otavio-silva/followers",
"following_url": "https://api.github.com/users/otavio-silva/following{/other_user}",
"gists_url": "https://api.github.com/users/otavio-silva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/otavio-silva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/otavio-silva/subscriptions",
"organizations_url": "https://api.github.com/users/otavio-silva/orgs",
"repos_url": "https://api.github.com/users/otavio-silva/repos",
"events_url": "https://api.github.com/users/otavio-silva/events{/privacy}",
"received_events_url": "https://api.github.com/users/otavio-silva/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-01-11T22:17:15
| 2024-01-11T23:02:59
| 2024-01-11T23:02:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# Description
It seems the latest version was released under the 0.0.0 tag (see https://hub.docker.com/r/ollama/ollama/tags and https://hub.docker.com/layers/ollama/ollama/0.0.0/images/sha256-720e093927cfaed71c70dcc70bd32f9c39be3937243ebd6ddcdce5016d5deb2b?context=explore) instead of 0.1.20, which is the correct number.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1933/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6335
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6335/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6335/comments
|
https://api.github.com/repos/ollama/ollama/issues/6335/events
|
https://github.com/ollama/ollama/issues/6335
| 2,462,885,777
|
I_kwDOJ0Z1Ps6SzKeR
| 6,335
|
Bug in Continuous Questioning and Output Content on Windows
|
{
"login": "Lucas-SJY",
"id": 72309268,
"node_id": "MDQ6VXNlcjcyMzA5MjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/72309268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lucas-SJY",
"html_url": "https://github.com/Lucas-SJY",
"followers_url": "https://api.github.com/users/Lucas-SJY/followers",
"following_url": "https://api.github.com/users/Lucas-SJY/following{/other_user}",
"gists_url": "https://api.github.com/users/Lucas-SJY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lucas-SJY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lucas-SJY/subscriptions",
"organizations_url": "https://api.github.com/users/Lucas-SJY/orgs",
"repos_url": "https://api.github.com/users/Lucas-SJY/repos",
"events_url": "https://api.github.com/users/Lucas-SJY/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lucas-SJY/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-13T09:38:03
| 2024-09-05T19:05:34
| 2024-09-05T19:05:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I ran into the following issue on Windows.
In ollama 0.2.5, running llama3.1, it cannot respond a second time and returned the following error message: "Error: template: :28:7: executing "" at <.ToolCalls>: can't evaluate field ToolCalls in type *api.Message", and it sometimes did not return any output. Everything is shown in the following screenshot.

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.5
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6335/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1034
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1034/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1034/comments
|
https://api.github.com/repos/ollama/ollama/issues/1034/events
|
https://github.com/ollama/ollama/pull/1034
| 1,981,783,166
|
PR_kwDOJ0Z1Ps5e1FqF
| 1,034
|
Fix sudo variable in install.sh
|
{
"login": "upchui",
"id": 24575829,
"node_id": "MDQ6VXNlcjI0NTc1ODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/24575829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/upchui",
"html_url": "https://github.com/upchui",
"followers_url": "https://api.github.com/users/upchui/followers",
"following_url": "https://api.github.com/users/upchui/following{/other_user}",
"gists_url": "https://api.github.com/users/upchui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/upchui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/upchui/subscriptions",
"organizations_url": "https://api.github.com/users/upchui/orgs",
"repos_url": "https://api.github.com/users/upchui/repos",
"events_url": "https://api.github.com/users/upchui/events{/privacy}",
"received_events_url": "https://api.github.com/users/upchui/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-07T16:42:52
| 2023-11-07T17:59:58
| 2023-11-07T17:59:57
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1034",
"html_url": "https://github.com/ollama/ollama/pull/1034",
"diff_url": "https://github.com/ollama/ollama/pull/1034.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1034.patch",
"merged_at": "2023-11-07T17:59:57"
}
|
One occurrence of sudo was not replaced with the sudo variable; this fixes it.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1034/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6824
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6824/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6824/comments
|
https://api.github.com/repos/ollama/ollama/issues/6824/events
|
https://github.com/ollama/ollama/issues/6824
| 2,528,136,689
|
I_kwDOJ0Z1Ps6WsE3x
| 6,824
|
How to remove this
|
{
"login": "lezi-fun",
"id": 177434121,
"node_id": "U_kgDOCpNuCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/177434121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lezi-fun",
"html_url": "https://github.com/lezi-fun",
"followers_url": "https://api.github.com/users/lezi-fun/followers",
"following_url": "https://api.github.com/users/lezi-fun/following{/other_user}",
"gists_url": "https://api.github.com/users/lezi-fun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lezi-fun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lezi-fun/subscriptions",
"organizations_url": "https://api.github.com/users/lezi-fun/orgs",
"repos_url": "https://api.github.com/users/lezi-fun/repos",
"events_url": "https://api.github.com/users/lezi-fun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lezi-fun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-09-16T10:57:56
| 2024-09-16T11:03:27
| 2024-09-16T11:03:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
How to remove this
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "lezi-fun",
"id": 177434121,
"node_id": "U_kgDOCpNuCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/177434121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lezi-fun",
"html_url": "https://github.com/lezi-fun",
"followers_url": "https://api.github.com/users/lezi-fun/followers",
"following_url": "https://api.github.com/users/lezi-fun/following{/other_user}",
"gists_url": "https://api.github.com/users/lezi-fun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lezi-fun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lezi-fun/subscriptions",
"organizations_url": "https://api.github.com/users/lezi-fun/orgs",
"repos_url": "https://api.github.com/users/lezi-fun/repos",
"events_url": "https://api.github.com/users/lezi-fun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lezi-fun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6824/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8088
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8088/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8088/comments
|
https://api.github.com/repos/ollama/ollama/issues/8088/events
|
https://github.com/ollama/ollama/issues/8088
| 2,738,574,153
|
I_kwDOJ0Z1Ps6jO1NJ
| 8,088
|
pull error EOF with gemma2:27b-instruct-q8_0
|
{
"login": "rcanand",
"id": 303900,
"node_id": "MDQ6VXNlcjMwMzkwMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/303900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcanand",
"html_url": "https://github.com/rcanand",
"followers_url": "https://api.github.com/users/rcanand/followers",
"following_url": "https://api.github.com/users/rcanand/following{/other_user}",
"gists_url": "https://api.github.com/users/rcanand/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcanand/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcanand/subscriptions",
"organizations_url": "https://api.github.com/users/rcanand/orgs",
"repos_url": "https://api.github.com/users/rcanand/repos",
"events_url": "https://api.github.com/users/rcanand/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcanand/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 15
| 2024-12-13T14:51:46
| 2024-12-14T16:39:22
| 2024-12-14T16:39:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I call `ollama pull gemma2:27b-instruct-q8_0`, I get error `EOF`.
I have pulled other models successfully (including other gemma2 models) on the same system, and I have enough disk space; I am running into this issue with just this model.
Based on web search, I suspect the file on the server itself is invalid or corrupted.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.1
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8088/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2906
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2906/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2906/comments
|
https://api.github.com/repos/ollama/ollama/issues/2906/events
|
https://github.com/ollama/ollama/issues/2906
| 2,165,830,857
|
I_kwDOJ0Z1Ps6BF_TJ
| 2,906
|
chat api stuck when using two ChatOllama same time
|
{
"login": "levin8023",
"id": 30230347,
"node_id": "MDQ6VXNlcjMwMjMwMzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/30230347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/levin8023",
"html_url": "https://github.com/levin8023",
"followers_url": "https://api.github.com/users/levin8023/followers",
"following_url": "https://api.github.com/users/levin8023/following{/other_user}",
"gists_url": "https://api.github.com/users/levin8023/gists{/gist_id}",
"starred_url": "https://api.github.com/users/levin8023/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/levin8023/subscriptions",
"organizations_url": "https://api.github.com/users/levin8023/orgs",
"repos_url": "https://api.github.com/users/levin8023/repos",
"events_url": "https://api.github.com/users/levin8023/events{/privacy}",
"received_events_url": "https://api.github.com/users/levin8023/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-04T03:10:15
| 2024-05-15T01:04:56
| 2024-05-15T01:04:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I use LangChain for testing the LLM. When two clients connect to ollama for a chat API response at the same time, it gets stuck with the following code:
`ChatOllama(model=xxx, base_url=xxx, verbose=True, temperature=0, num_ctx=2048)` (same model)
and I have to restart the ollama server. Is there any solution for using the ollama chat API with more than one client at the same time?

|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2906/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8361
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8361/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8361/comments
|
https://api.github.com/repos/ollama/ollama/issues/8361/events
|
https://github.com/ollama/ollama/issues/8361
| 2,777,280,036
|
I_kwDOJ0Z1Ps6lie4k
| 8,361
|
llama3.1-8B doesn't utilize my gpu
|
{
"login": "sunday-hao",
"id": 127651124,
"node_id": "U_kgDOB5vNNA",
"avatar_url": "https://avatars.githubusercontent.com/u/127651124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunday-hao",
"html_url": "https://github.com/sunday-hao",
"followers_url": "https://api.github.com/users/sunday-hao/followers",
"following_url": "https://api.github.com/users/sunday-hao/following{/other_user}",
"gists_url": "https://api.github.com/users/sunday-hao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunday-hao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunday-hao/subscriptions",
"organizations_url": "https://api.github.com/users/sunday-hao/orgs",
"repos_url": "https://api.github.com/users/sunday-hao/repos",
"events_url": "https://api.github.com/users/sunday-hao/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunday-hao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 10
| 2025-01-09T09:26:11
| 2025-01-10T04:08:55
| 2025-01-10T03:38:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I tried to run llama3.1-8B-Instruct, it didn't utilize my GPU and only used my CPU, so generation was very slow. However, the server log said that the Ollama server detected my GPU and moved the model onto it. Could anyone help me? The output of `nvidia-smi` and the server log are included below. Thanks a lot.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "sunday-hao",
"id": 127651124,
"node_id": "U_kgDOB5vNNA",
"avatar_url": "https://avatars.githubusercontent.com/u/127651124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunday-hao",
"html_url": "https://github.com/sunday-hao",
"followers_url": "https://api.github.com/users/sunday-hao/followers",
"following_url": "https://api.github.com/users/sunday-hao/following{/other_user}",
"gists_url": "https://api.github.com/users/sunday-hao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunday-hao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunday-hao/subscriptions",
"organizations_url": "https://api.github.com/users/sunday-hao/orgs",
"repos_url": "https://api.github.com/users/sunday-hao/repos",
"events_url": "https://api.github.com/users/sunday-hao/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunday-hao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8361/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8686
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8686/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8686/comments
|
https://api.github.com/repos/ollama/ollama/issues/8686/events
|
https://github.com/ollama/ollama/issues/8686
| 2,820,001,072
|
I_kwDOJ0Z1Ps6oFc0w
| 8,686
|
Support Deepseek Janus Pro Series (7B & 1B)
|
{
"login": "zytoh0",
"id": 90326544,
"node_id": "MDQ6VXNlcjkwMzI2NTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/90326544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zytoh0",
"html_url": "https://github.com/zytoh0",
"followers_url": "https://api.github.com/users/zytoh0/followers",
"following_url": "https://api.github.com/users/zytoh0/following{/other_user}",
"gists_url": "https://api.github.com/users/zytoh0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zytoh0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zytoh0/subscriptions",
"organizations_url": "https://api.github.com/users/zytoh0/orgs",
"repos_url": "https://api.github.com/users/zytoh0/repos",
"events_url": "https://api.github.com/users/zytoh0/events{/privacy}",
"received_events_url": "https://api.github.com/users/zytoh0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 2
| 2025-01-30T06:17:54
| 2025-01-30T08:28:58
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello, good day to you all. I would like to request that ollama add support for Deepseek Janus Pro Series (currently only 7B & 1B):
1. https://huggingface.co/deepseek-ai/Janus-Pro-1B
2. https://huggingface.co/deepseek-ai/Janus-Pro-7B
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8686/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8686/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2261
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2261/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2261/comments
|
https://api.github.com/repos/ollama/ollama/issues/2261/events
|
https://github.com/ollama/ollama/issues/2261
| 2,106,395,396
|
I_kwDOJ0Z1Ps59jQsE
| 2,261
|
:link: Documentation request - Please add HF model url on `codellama` model page :pray:
|
{
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/followers",
"following_url": "https://api.github.com/users/adriens/following{/other_user}",
"gists_url": "https://api.github.com/users/adriens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adriens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adriens/subscriptions",
"organizations_url": "https://api.github.com/users/adriens/orgs",
"repos_url": "https://api.github.com/users/adriens/repos",
"events_url": "https://api.github.com/users/adriens/events{/privacy}",
"received_events_url": "https://api.github.com/users/adriens/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-01-29T20:49:16
| 2024-05-11T20:17:47
| 2024-05-10T23:34:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# :grey_question: About
[`codellama` has just been released with its 70B version](https://twitter.com/ollama/status/1752034262101205450)

:point_right: ... but on its [`ollama` library page](https://ollama.ai/library/codellama) there is no HF URL:

# :pray: Documentation request
- If applicable, add https://huggingface.co/codellama to the "More information" section:

## :bookmark_tabs: Links
- https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf
- https://huggingface.co/codellama
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2261/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5937
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5937/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5937/comments
|
https://api.github.com/repos/ollama/ollama/issues/5937/events
|
https://github.com/ollama/ollama/issues/5937
| 2,428,808,383
|
I_kwDOJ0Z1Ps6QxKy_
| 5,937
|
Request to add PyOllaMx to the community integration list under Web/Desktop Category
|
{
"login": "kspviswa",
"id": 7476271,
"node_id": "MDQ6VXNlcjc0NzYyNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7476271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kspviswa",
"html_url": "https://github.com/kspviswa",
"followers_url": "https://api.github.com/users/kspviswa/followers",
"following_url": "https://api.github.com/users/kspviswa/following{/other_user}",
"gists_url": "https://api.github.com/users/kspviswa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kspviswa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kspviswa/subscriptions",
"organizations_url": "https://api.github.com/users/kspviswa/orgs",
"repos_url": "https://api.github.com/users/kspviswa/repos",
"events_url": "https://api.github.com/users/kspviswa/events{/privacy}",
"received_events_url": "https://api.github.com/users/kspviswa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-25T02:17:40
| 2024-09-04T03:05:22
| 2024-09-04T01:59:35
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Project details : https://github.com/kspviswa/pyOllaMx/
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5937/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3805
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3805/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3805/comments
|
https://api.github.com/repos/ollama/ollama/issues/3805/events
|
https://github.com/ollama/ollama/pull/3805
| 2,255,247,570
|
PR_kwDOJ0Z1Ps5tSFeH
| 3,805
|
♻️ refactor: update langchain-python-simple to use the langchain_community
|
{
"login": "dkruyt",
"id": 713812,
"node_id": "MDQ6VXNlcjcxMzgxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/713812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dkruyt",
"html_url": "https://github.com/dkruyt",
"followers_url": "https://api.github.com/users/dkruyt/followers",
"following_url": "https://api.github.com/users/dkruyt/following{/other_user}",
"gists_url": "https://api.github.com/users/dkruyt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dkruyt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkruyt/subscriptions",
"organizations_url": "https://api.github.com/users/dkruyt/orgs",
"repos_url": "https://api.github.com/users/dkruyt/repos",
"events_url": "https://api.github.com/users/dkruyt/events{/privacy}",
"received_events_url": "https://api.github.com/users/dkruyt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-04-21T20:22:30
| 2024-11-21T11:05:54
| 2024-11-21T11:05:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3805",
"html_url": "https://github.com/ollama/ollama/pull/3805",
"diff_url": "https://github.com/ollama/ollama/pull/3805.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3805.patch",
"merged_at": null
}
|
* importing `Ollama` from `langchain.llms` is deprecated; import it from `langchain_community` instead
* `predict` is deprecated; use `invoke`
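For context, a minimal before/after sketch of the change described above (the model name is a placeholder, not taken from the PR):
```
# Deprecated pattern the example previously used:
#   from langchain.llms import Ollama
#   llm = Ollama(model="llama2")
#   print(llm.predict("Why is the sky blue?"))

# Updated pattern using the community package and invoke():
from langchain_community.llms import Ollama

llm = Ollama(model="llama2")
print(llm.invoke("Why is the sky blue?"))
```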
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3805/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/644
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/644/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/644/comments
|
https://api.github.com/repos/ollama/ollama/issues/644/events
|
https://github.com/ollama/ollama/issues/644
| 1,918,748,305
|
I_kwDOJ0Z1Ps5yXcaR
| 644
|
error: illegal instruction on CPUs without AVX or AVX2 instruction sets
|
{
"login": "jacoboglez",
"id": 31385011,
"node_id": "MDQ6VXNlcjMxMzg1MDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/31385011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacoboglez",
"html_url": "https://github.com/jacoboglez",
"followers_url": "https://api.github.com/users/jacoboglez/followers",
"following_url": "https://api.github.com/users/jacoboglez/following{/other_user}",
"gists_url": "https://api.github.com/users/jacoboglez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jacoboglez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jacoboglez/subscriptions",
"organizations_url": "https://api.github.com/users/jacoboglez/orgs",
"repos_url": "https://api.github.com/users/jacoboglez/repos",
"events_url": "https://api.github.com/users/jacoboglez/events{/privacy}",
"received_events_url": "https://api.github.com/users/jacoboglez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 14
| 2023-09-29T07:33:17
| 2024-10-07T17:16:59
| 2023-10-28T19:24:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I was testing the Ollama release for WSL and I could not get any model running.
I installed it as indicated on the website:
`curl https://ollama.ai/install.sh | sh`
I got the server running correctly, and the model downloaded properly.
Finally, when trying to run the model (`ollama run llama2`) I got the following error on the server:
```
2023/09/29 09:11:12 llama.go:310: starting llama runner
2023/09/29 09:11:12 llama.go:346: waiting for llama runner to start responding
2023/09/29 09:11:12 llama.go:320: llama runner exited with error: signal: illegal instruction
```
I was trying to run it in Ubuntu-22.04 (WSL version 2). Processor: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz.
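As a side note, here is a quick sketch (not from the original report) for confirming which instruction sets the CPU exposes on Linux/WSL; the i7-3770 is an Ivy Bridge part, which supports AVX but not AVX2.
```
# Read the CPU feature flags from /proc/cpuinfo and report AVX/AVX2 support.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

print("avx  supported:", "avx" in flags)
print("avx2 supported:", "avx2" in flags)
```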
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/644/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/644/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5980
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5980/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5980/comments
|
https://api.github.com/repos/ollama/ollama/issues/5980/events
|
https://github.com/ollama/ollama/issues/5980
| 2,431,938,432
|
I_kwDOJ0Z1Ps6Q9G-A
| 5,980
|
Context in /api/generate response grows too big.
|
{
"login": "slouffka",
"id": 8129,
"node_id": "MDQ6VXNlcjgxMjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slouffka",
"html_url": "https://github.com/slouffka",
"followers_url": "https://api.github.com/users/slouffka/followers",
"following_url": "https://api.github.com/users/slouffka/following{/other_user}",
"gists_url": "https://api.github.com/users/slouffka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slouffka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slouffka/subscriptions",
"organizations_url": "https://api.github.com/users/slouffka/orgs",
"repos_url": "https://api.github.com/users/slouffka/repos",
"events_url": "https://api.github.com/users/slouffka/events{/privacy}",
"received_events_url": "https://api.github.com/users/slouffka/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-07-26T10:31:22
| 2024-11-21T12:47:42
| 2024-08-01T22:14:00
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm writing my own chat UI for Ollama and using the context feature to implement dialog mode. Every time Ollama generates a response, the returned context (embeddings) is saved into the chat object. On the next prompt this context is passed to `/api/generate`, and after the response the resulting context is saved into the chat object again.
After upgrading to the latest Ollama I've noticed that generation speed degraded considerably and that the context returned by `/api/generate` grows much faster than in previous versions.
It looks like the context size doubles after each generation, and soon, in a relatively small chat of 26 messages, it reaches 3-7 MB. This makes my UI unresponsive and freezes the browser, because it has to process such a huge amount of data (mostly for debugging, e.g. converting the JSON to a string, but this is not normal in any case). Earlier (at least on the 0.2.1 version I used) it stayed around 8-16 KB, which is totally fine and also fits the model's capacity.
This is hard to measure (and I don't know how to), but I've also noticed that with the latest Ollama, newer models like gemma2 or llama3.1 do not adhere to the context as well as some older models like mistral did on earlier Ollama versions. This could be related to the context changes: context was broken in 0.2.2, then the response was fixed, but it looks like the fix was not completely correct.
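For reference, a minimal sketch of the context round-tripping pattern described above (the local server address and the model name are assumptions, not from the report):
```
import requests

OLLAMA_GENERATE = "http://localhost:11434/api/generate"
context = None  # the context array returned by the previous turn

for prompt in ["Hi, who are you?", "What did I just ask you?"]:
    payload = {"model": "mistral", "prompt": prompt, "stream": False}
    if context is not None:
        payload["context"] = context
    resp = requests.post(OLLAMA_GENERATE, json=payload).json()
    context = resp.get("context")  # this is the value that grows between turns
    print("context length after this turn:", len(context or []))
```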
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.0
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5980/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/5980/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3474
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3474/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3474/comments
|
https://api.github.com/repos/ollama/ollama/issues/3474/events
|
https://github.com/ollama/ollama/issues/3474
| 2,222,766,896
|
I_kwDOJ0Z1Ps6EfLsw
| 3,474
|
ollama process exit but llama.cpp process remains as a zombie process
|
{
"login": "mofanke",
"id": 54242816,
"node_id": "MDQ6VXNlcjU0MjQyODE2",
"avatar_url": "https://avatars.githubusercontent.com/u/54242816?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mofanke",
"html_url": "https://github.com/mofanke",
"followers_url": "https://api.github.com/users/mofanke/followers",
"following_url": "https://api.github.com/users/mofanke/following{/other_user}",
"gists_url": "https://api.github.com/users/mofanke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mofanke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mofanke/subscriptions",
"organizations_url": "https://api.github.com/users/mofanke/orgs",
"repos_url": "https://api.github.com/users/mofanke/repos",
"events_url": "https://api.github.com/users/mofanke/events{/privacy}",
"received_events_url": "https://api.github.com/users/mofanke/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-04-03T12:12:52
| 2024-06-13T21:26:16
| 2024-04-28T18:58:54
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?

Then I killed the ollama process:

### What did you expect to see?
The llama.cpp process should exit when the ollama process exits.
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
_No response_
### Architecture
_No response_
### Platform
_No response_
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3474/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7228
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7228/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7228/comments
|
https://api.github.com/repos/ollama/ollama/issues/7228/events
|
https://github.com/ollama/ollama/issues/7228
| 2,592,583,406
|
I_kwDOJ0Z1Ps6ah67u
| 7,228
|
Llama-3.1-Nemotron-70B
|
{
"login": "nonetrix",
"id": 45698918,
"node_id": "MDQ6VXNlcjQ1Njk4OTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/45698918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nonetrix",
"html_url": "https://github.com/nonetrix",
"followers_url": "https://api.github.com/users/nonetrix/followers",
"following_url": "https://api.github.com/users/nonetrix/following{/other_user}",
"gists_url": "https://api.github.com/users/nonetrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nonetrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nonetrix/subscriptions",
"organizations_url": "https://api.github.com/users/nonetrix/orgs",
"repos_url": "https://api.github.com/users/nonetrix/repos",
"events_url": "https://api.github.com/users/nonetrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/nonetrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-10-16T17:33:14
| 2024-10-16T22:02:18
| 2024-10-16T22:02:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It seems to just be Llama 3.1 with the ChatML prompt format(?), so it should be easy to add, and it seems to beat 4o on some benchmarks... We will see how that actually plays out, but it has seemed really good to me.
https://huggingface.co/collections/nvidia/llama-31-nemotron-70b-670e93cd366feea16abc13d8
|
{
"login": "nonetrix",
"id": 45698918,
"node_id": "MDQ6VXNlcjQ1Njk4OTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/45698918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nonetrix",
"html_url": "https://github.com/nonetrix",
"followers_url": "https://api.github.com/users/nonetrix/followers",
"following_url": "https://api.github.com/users/nonetrix/following{/other_user}",
"gists_url": "https://api.github.com/users/nonetrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nonetrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nonetrix/subscriptions",
"organizations_url": "https://api.github.com/users/nonetrix/orgs",
"repos_url": "https://api.github.com/users/nonetrix/repos",
"events_url": "https://api.github.com/users/nonetrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/nonetrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7228/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6322
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6322/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6322/comments
|
https://api.github.com/repos/ollama/ollama/issues/6322/events
|
https://github.com/ollama/ollama/issues/6322
| 2,461,455,113
|
I_kwDOJ0Z1Ps6SttMJ
| 6,322
|
Why role must be "system" or "user" or "assistant"? How can I add a custom role like "tool"?
|
{
"login": "zhangsheng377",
"id": 3692247,
"node_id": "MDQ6VXNlcjM2OTIyNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3692247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangsheng377",
"html_url": "https://github.com/zhangsheng377",
"followers_url": "https://api.github.com/users/zhangsheng377/followers",
"following_url": "https://api.github.com/users/zhangsheng377/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangsheng377/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangsheng377/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangsheng377/subscriptions",
"organizations_url": "https://api.github.com/users/zhangsheng377/orgs",
"repos_url": "https://api.github.com/users/zhangsheng377/repos",
"events_url": "https://api.github.com/users/zhangsheng377/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangsheng377/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 13
| 2024-08-12T16:39:06
| 2024-09-04T16:11:44
| 2024-09-04T04:25:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://github.com/ollama/ollama/blob/15c2d8fe149ba2b58aadbab615a6955f8821c7a9/parser/parser.go#L294
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6322/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8392
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8392/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8392/comments
|
https://api.github.com/repos/ollama/ollama/issues/8392/events
|
https://github.com/ollama/ollama/issues/8392
| 2,782,317,383
|
I_kwDOJ0Z1Ps6l1stH
| 8,392
|
Empty 'assistant' message
|
{
"login": "pulinagrawal",
"id": 8232040,
"node_id": "MDQ6VXNlcjgyMzIwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8232040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pulinagrawal",
"html_url": "https://github.com/pulinagrawal",
"followers_url": "https://api.github.com/users/pulinagrawal/followers",
"following_url": "https://api.github.com/users/pulinagrawal/following{/other_user}",
"gists_url": "https://api.github.com/users/pulinagrawal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pulinagrawal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pulinagrawal/subscriptions",
"organizations_url": "https://api.github.com/users/pulinagrawal/orgs",
"repos_url": "https://api.github.com/users/pulinagrawal/repos",
"events_url": "https://api.github.com/users/pulinagrawal/events{/privacy}",
"received_events_url": "https://api.github.com/users/pulinagrawal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2025-01-12T06:45:37
| 2025-01-13T19:25:57
| 2025-01-13T19:25:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
with the following python code
```
>>> import ollama
>>> my = {
... "model": "llama3.2",
... "options": {
... "temperature": 0
... },
... "messages": [{"role": "system", "content": "You are a DnD Dungeon Master. Say something in your first message to the user."}
... ],
... "tools": [
... {
... "type": "function",
... "function": {
... "name": "roll_for_action",
... "description": "Checks a dice roll for a skill with a given difficulty class",
... "parameters": {
... "type": "object",
... "properties": {"n_dice": {"type": "integer"},
... "sides": {"type": "integer"},
... "skill": {"type": "string"},
... "dc": {"type": "integer"},
... "player": {"type": "string"}
... }
... }
... }
... }]
... }
>>> response = ollama.chat(**my)
>>> print(response)
model='llama3.2' created_at='2025-01-12T06:38:51.757532Z' done=True done_reason='stop' total_duration=1349066625 load_duration=16419958 prompt_eval_count=66 prompt_eval_duration=1330000000 eval_count=1 eval_duration=1000000 message=Message(role='assistant', content='', images=None, tool_calls=None)
```
The response content is empty.
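A small follow-up sketch (not from the original report, attribute names assumed per recent ollama-python releases): when tools are passed, an empty `content` is sometimes paired with a tool call, so printing `tool_calls` explicitly helps show that here neither is returned.
```
# Inspect the parts of the returned message separately.
if response.message.tool_calls:
    for call in response.message.tool_calls:
        print("tool call:", call.function.name, call.function.arguments)
else:
    print("no tool call, content:", repr(response.message.content))
```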
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.4
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8392/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2651
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2651/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2651/comments
|
https://api.github.com/repos/ollama/ollama/issues/2651/events
|
https://github.com/ollama/ollama/issues/2651
| 2,147,557,948
|
I_kwDOJ0Z1Ps6AASI8
| 2,651
|
Download Monitoring Error
|
{
"login": "crimson206",
"id": 110409356,
"node_id": "U_kgDOBpS2jA",
"avatar_url": "https://avatars.githubusercontent.com/u/110409356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/crimson206",
"html_url": "https://github.com/crimson206",
"followers_url": "https://api.github.com/users/crimson206/followers",
"following_url": "https://api.github.com/users/crimson206/following{/other_user}",
"gists_url": "https://api.github.com/users/crimson206/gists{/gist_id}",
"starred_url": "https://api.github.com/users/crimson206/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/crimson206/subscriptions",
"organizations_url": "https://api.github.com/users/crimson206/orgs",
"repos_url": "https://api.github.com/users/crimson206/repos",
"events_url": "https://api.github.com/users/crimson206/events{/privacy}",
"received_events_url": "https://api.github.com/users/crimson206/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-21T19:49:46
| 2024-02-21T23:23:54
| 2024-02-21T23:23:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2651/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/3518
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3518/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3518/comments
|
https://api.github.com/repos/ollama/ollama/issues/3518/events
|
https://github.com/ollama/ollama/pull/3518
| 2,229,492,472
|
PR_kwDOJ0Z1Ps5r674h
| 3,518
|
ignore vscode debug build
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-07T01:01:04
| 2024-04-23T00:47:32
| 2024-04-23T00:47:32
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3518",
"html_url": "https://github.com/ollama/ollama/pull/3518",
"diff_url": "https://github.com/ollama/ollama/pull/3518.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3518.patch",
"merged_at": null
}
|
Prevent this from accidentally getting added to the repo history.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3518/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/223
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/223/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/223/comments
|
https://api.github.com/repos/ollama/ollama/issues/223/events
|
https://github.com/ollama/ollama/pull/223
| 1,823,487,815
|
PR_kwDOJ0Z1Ps5Wf9tc
| 223
|
show system/template/license layers from cmd prompt
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-27T02:07:07
| 2023-07-27T23:58:41
| 2023-07-27T23:58:40
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/223",
"html_url": "https://github.com/ollama/ollama/pull/223",
"diff_url": "https://github.com/ollama/ollama/pull/223.diff",
"patch_url": "https://github.com/ollama/ollama/pull/223.patch",
"merged_at": "2023-07-27T23:58:40"
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/223/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5094
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5094/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5094/comments
|
https://api.github.com/repos/ollama/ollama/issues/5094/events
|
https://github.com/ollama/ollama/issues/5094
| 2,356,504,379
|
I_kwDOJ0Z1Ps6MdWc7
| 5,094
|
No "Restart to update" option for Windows auto update
|
{
"login": "vootox",
"id": 27273724,
"node_id": "MDQ6VXNlcjI3MjczNzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/27273724?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vootox",
"html_url": "https://github.com/vootox",
"followers_url": "https://api.github.com/users/vootox/followers",
"following_url": "https://api.github.com/users/vootox/following{/other_user}",
"gists_url": "https://api.github.com/users/vootox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vootox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vootox/subscriptions",
"organizations_url": "https://api.github.com/users/vootox/orgs",
"repos_url": "https://api.github.com/users/vootox/repos",
"events_url": "https://api.github.com/users/vootox/events{/privacy}",
"received_events_url": "https://api.github.com/users/vootox/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-06-17T06:33:51
| 2024-06-19T16:32:34
| 2024-06-19T16:32:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
A popup says there is an Ollama update and that I must _Click on the taskbar or menubar item and click "Restart to update" to apply the update._ But I only see `View Log` and `Quit Ollama`, with no `Restart to update`. The logs do appear to show the update was installed, so I guess it's being applied automatically. I'd rather the message stated that, or actually let me trigger it myself.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5094/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5094/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4305
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4305/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4305/comments
|
https://api.github.com/repos/ollama/ollama/issues/4305/events
|
https://github.com/ollama/ollama/pull/4305
| 2,288,618,125
|
PR_kwDOJ0Z1Ps5vCjmF
| 4,305
|
fix typo
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-09T23:23:48
| 2024-05-09T23:42:10
| 2024-05-09T23:42:10
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4305",
"html_url": "https://github.com/ollama/ollama/pull/4305",
"diff_url": "https://github.com/ollama/ollama/pull/4305.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4305.patch",
"merged_at": "2024-05-09T23:42:10"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4305/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4737
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4737/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4737/comments
|
https://api.github.com/repos/ollama/ollama/issues/4737/events
|
https://github.com/ollama/ollama/pull/4737
| 2,326,726,821
|
PR_kwDOJ0Z1Ps5xEW3q
| 4,737
|
only generate on relevant changes
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-30T23:54:28
| 2024-05-31T00:17:51
| 2024-05-31T00:17:50
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4737",
"html_url": "https://github.com/ollama/ollama/pull/4737",
"diff_url": "https://github.com/ollama/ollama/pull/4737.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4737.patch",
"merged_at": "2024-05-31T00:17:50"
}
|
Relevant changes include changes to the C++ sources, the generate scripts, or the llama.cpp submodule.
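A rough sketch of the idea (the file patterns below are illustrative, not the exact filters used in the workflow):
```shell
# decide whether the expensive generate step is needed for this commit
changed=$(git diff --name-only HEAD~1 -- '*.c' '*.cpp' '*.h' 'llm/generate/' 'llm/llama.cpp')
if [ -n "$changed" ]; then
  echo "relevant change detected: run go generate"
else
  echo "no relevant change: skip the generate step"
fi
```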
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4737/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4193
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4193/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4193/comments
|
https://api.github.com/repos/ollama/ollama/issues/4193/events
|
https://github.com/ollama/ollama/issues/4193
| 2,279,952,316
|
I_kwDOJ0Z1Ps6H5U-8
| 4,193
|
mixtral:8x22b has missing weights
|
{
"login": "codebam",
"id": 6035884,
"node_id": "MDQ6VXNlcjYwMzU4ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6035884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codebam",
"html_url": "https://github.com/codebam",
"followers_url": "https://api.github.com/users/codebam/followers",
"following_url": "https://api.github.com/users/codebam/following{/other_user}",
"gists_url": "https://api.github.com/users/codebam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codebam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codebam/subscriptions",
"organizations_url": "https://api.github.com/users/codebam/orgs",
"repos_url": "https://api.github.com/users/codebam/repos",
"events_url": "https://api.github.com/users/codebam/events{/privacy}",
"received_events_url": "https://api.github.com/users/codebam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-06T03:36:27
| 2024-05-06T18:31:39
| 2024-05-06T18:31:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
❯ ollama run mixtral:8x22b
Error: exception create_tensor: tensor 'blk.0.ffn_gate.0.weight' not found
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.31
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4193/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8537
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8537/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8537/comments
|
https://api.github.com/repos/ollama/ollama/issues/8537/events
|
https://github.com/ollama/ollama/issues/8537
| 2,804,977,080
|
I_kwDOJ0Z1Ps6nMI24
| 8,537
|
Ollama stops giving outputs after a few runs
|
{
"login": "mansibm6",
"id": 63543775,
"node_id": "MDQ6VXNlcjYzNTQzNzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/63543775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mansibm6",
"html_url": "https://github.com/mansibm6",
"followers_url": "https://api.github.com/users/mansibm6/followers",
"following_url": "https://api.github.com/users/mansibm6/following{/other_user}",
"gists_url": "https://api.github.com/users/mansibm6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mansibm6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mansibm6/subscriptions",
"organizations_url": "https://api.github.com/users/mansibm6/orgs",
"repos_url": "https://api.github.com/users/mansibm6/repos",
"events_url": "https://api.github.com/users/mansibm6/events{/privacy}",
"received_events_url": "https://api.github.com/users/mansibm6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 2
| 2025-01-22T17:37:05
| 2025-01-22T20:40:21
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I've been trying to run "smallthinker" and "llama3.2:1b", but after around 30 runs, the models stop giving outputs. However, ollama is running with 100% CPU in the background on my Mac.
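A minimal reproduction sketch against the default local API (the model name, prompt, and run count are just examples matching the description above):
```shell
for i in $(seq 1 40); do
  echo "run $i"
  curl -s http://localhost:11434/api/generate \
    -d '{"model": "llama3.2:1b", "prompt": "Say hi", "stream": false}'
  echo
done
# the report above suggests responses stop arriving after roughly 30 iterations
```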
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8537/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/460
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/460/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/460/comments
|
https://api.github.com/repos/ollama/ollama/issues/460/events
|
https://github.com/ollama/ollama/issues/460
| 1,879,040,158
|
I_kwDOJ0Z1Ps5v_-Ce
| 460
|
404 Client Error: Not Found for url: https://ollama.ai/api/models when running the model
|
{
"login": "Satyam7166-tech",
"id": 62897696,
"node_id": "MDQ6VXNlcjYyODk3Njk2",
"avatar_url": "https://avatars.githubusercontent.com/u/62897696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Satyam7166-tech",
"html_url": "https://github.com/Satyam7166-tech",
"followers_url": "https://api.github.com/users/Satyam7166-tech/followers",
"following_url": "https://api.github.com/users/Satyam7166-tech/following{/other_user}",
"gists_url": "https://api.github.com/users/Satyam7166-tech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Satyam7166-tech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Satyam7166-tech/subscriptions",
"organizations_url": "https://api.github.com/users/Satyam7166-tech/orgs",
"repos_url": "https://api.github.com/users/Satyam7166-tech/repos",
"events_url": "https://api.github.com/users/Satyam7166-tech/events{/privacy}",
"received_events_url": "https://api.github.com/users/Satyam7166-tech/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-09-03T10:25:06
| 2023-09-03T13:52:27
| 2023-09-03T13:51:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This was working for me yesterday, but it is giving me this error after a restart. My Ollama server is on:
<img width="589" alt="image" src="https://github.com/jmorganca/ollama/assets/62897696/883387f7-6e19-4b09-abd8-38d717122bda">
System: Mac M1 Pro.
Also, I tried this under a different user on my Mac and it works.
I can also connect it to LangChain and a vector database:
<img width="722" alt="image" src="https://github.com/jmorganca/ollama/assets/62897696/8f994d7a-29ad-49fc-a2c3-b42179c553de">
But specifying any model other than the ones I had already pulled gives me this error:
`raise ValueError(
ValueError: Ollama call failed with status code 400. Details: stat /Users/satyam7166/.ollama/models/manifests/registry.ollama.ai/library/llama2/13b: no such file or directory`
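A minimal sketch of the usual fix for that second error, assuming the `llama2:13b` tag simply has not been pulled yet under this user:
```shell
ollama pull llama2:13b    # fetch the manifest and weights for the tag LangChain is asking for
ollama list               # confirm the model now shows up before retrying the LangChain call
```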
PS: I love this project, as it's the only one I've found that can properly utilize MPS while connecting to my vector database. However, I am a complete beginner, so if you need any other information, please ask.
|
{
"login": "Satyam7166-tech",
"id": 62897696,
"node_id": "MDQ6VXNlcjYyODk3Njk2",
"avatar_url": "https://avatars.githubusercontent.com/u/62897696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Satyam7166-tech",
"html_url": "https://github.com/Satyam7166-tech",
"followers_url": "https://api.github.com/users/Satyam7166-tech/followers",
"following_url": "https://api.github.com/users/Satyam7166-tech/following{/other_user}",
"gists_url": "https://api.github.com/users/Satyam7166-tech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Satyam7166-tech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Satyam7166-tech/subscriptions",
"organizations_url": "https://api.github.com/users/Satyam7166-tech/orgs",
"repos_url": "https://api.github.com/users/Satyam7166-tech/repos",
"events_url": "https://api.github.com/users/Satyam7166-tech/events{/privacy}",
"received_events_url": "https://api.github.com/users/Satyam7166-tech/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/460/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4650
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4650/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4650/comments
|
https://api.github.com/repos/ollama/ollama/issues/4650/events
|
https://github.com/ollama/ollama/issues/4650
| 2,317,818,461
|
I_kwDOJ0Z1Ps6KJxpd
| 4,650
|
BCEmbedding model support
|
{
"login": "laipz8200",
"id": 16485841,
"node_id": "MDQ6VXNlcjE2NDg1ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/16485841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laipz8200",
"html_url": "https://github.com/laipz8200",
"followers_url": "https://api.github.com/users/laipz8200/followers",
"following_url": "https://api.github.com/users/laipz8200/following{/other_user}",
"gists_url": "https://api.github.com/users/laipz8200/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laipz8200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laipz8200/subscriptions",
"organizations_url": "https://api.github.com/users/laipz8200/orgs",
"repos_url": "https://api.github.com/users/laipz8200/repos",
"events_url": "https://api.github.com/users/laipz8200/events{/privacy}",
"received_events_url": "https://api.github.com/users/laipz8200/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-26T16:10:17
| 2024-05-26T23:54:42
| 2024-05-26T23:54:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
I would like to request support for the [BCEmbedding](https://github.com/netease-youdao/BCEmbedding) model, which is an embedding model that performs exceptionally well in both Chinese and English.
Thank you very much for your work.
|
{
"login": "laipz8200",
"id": 16485841,
"node_id": "MDQ6VXNlcjE2NDg1ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/16485841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laipz8200",
"html_url": "https://github.com/laipz8200",
"followers_url": "https://api.github.com/users/laipz8200/followers",
"following_url": "https://api.github.com/users/laipz8200/following{/other_user}",
"gists_url": "https://api.github.com/users/laipz8200/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laipz8200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laipz8200/subscriptions",
"organizations_url": "https://api.github.com/users/laipz8200/orgs",
"repos_url": "https://api.github.com/users/laipz8200/repos",
"events_url": "https://api.github.com/users/laipz8200/events{/privacy}",
"received_events_url": "https://api.github.com/users/laipz8200/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4650/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/970
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/970/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/970/comments
|
https://api.github.com/repos/ollama/ollama/issues/970/events
|
https://github.com/ollama/ollama/issues/970
| 1,973,900,109
|
I_kwDOJ0Z1Ps51p1NN
| 970
|
problem on last release
|
{
"login": "francescoagati",
"id": 175524,
"node_id": "MDQ6VXNlcjE3NTUyNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/175524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francescoagati",
"html_url": "https://github.com/francescoagati",
"followers_url": "https://api.github.com/users/francescoagati/followers",
"following_url": "https://api.github.com/users/francescoagati/following{/other_user}",
"gists_url": "https://api.github.com/users/francescoagati/gists{/gist_id}",
"starred_url": "https://api.github.com/users/francescoagati/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francescoagati/subscriptions",
"organizations_url": "https://api.github.com/users/francescoagati/orgs",
"repos_url": "https://api.github.com/users/francescoagati/repos",
"events_url": "https://api.github.com/users/francescoagati/events{/privacy}",
"received_events_url": "https://api.github.com/users/francescoagati/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2023-11-02T10:00:29
| 2023-11-04T20:34:03
| 2023-11-04T18:55:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
I have noticed a big change with the last release.
Many models, on a simple summarization task, go haywire and either generate random words or enter an infinite loop.
I have rolled back to an old version of Ollama.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/970/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/970/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4264
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4264/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4264/comments
|
https://api.github.com/repos/ollama/ollama/issues/4264/events
|
https://github.com/ollama/ollama/pull/4264
| 2,286,303,700
|
PR_kwDOJ0Z1Ps5u6wk3
| 4,264
|
Centralize GPU configuration vars
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-08T19:35:08
| 2024-06-15T14:33:56
| 2024-06-15T14:33:52
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4264",
"html_url": "https://github.com/ollama/ollama/pull/4264",
"diff_url": "https://github.com/ollama/ollama/pull/4264.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4264.patch",
"merged_at": "2024-06-15T14:33:52"
}
|
This should aid in troubleshooting by capturing and reporting the GPU settings at startup in the logs along with all the other server settings.
Fixes #4139
Example output setting the ROCm gfx override:
```
2024/05/08 19:33:27 routes.go:993: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION:10.3.0 OLLAMA_DEBUG:true OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
```
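A quick sketch of exercising the new logging (the override value here is only an example):
```shell
# start the server with debug logging and a ROCm gfx override set
OLLAMA_DEBUG=1 HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve
# the first lines of the server log should now include a "server config env=..."
# entry listing the GPU-related variables alongside the other OLLAMA_* settings
```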
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4264/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/190
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/190/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/190/comments
|
https://api.github.com/repos/ollama/ollama/issues/190/events
|
https://github.com/ollama/ollama/issues/190
| 1,818,491,695
|
I_kwDOJ0Z1Ps5sY_sv
| 190
|
brew formula
|
{
"login": "ryanmerolle",
"id": 9010275,
"node_id": "MDQ6VXNlcjkwMTAyNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9010275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryanmerolle",
"html_url": "https://github.com/ryanmerolle",
"followers_url": "https://api.github.com/users/ryanmerolle/followers",
"following_url": "https://api.github.com/users/ryanmerolle/following{/other_user}",
"gists_url": "https://api.github.com/users/ryanmerolle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryanmerolle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryanmerolle/subscriptions",
"organizations_url": "https://api.github.com/users/ryanmerolle/orgs",
"repos_url": "https://api.github.com/users/ryanmerolle/repos",
"events_url": "https://api.github.com/users/ryanmerolle/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryanmerolle/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-07-24T13:59:35
| 2023-08-30T21:28:48
| 2023-08-30T21:28:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
A brew formula would be super helpful. Thanks for all your work here!
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/190/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/190/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1450
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1450/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1450/comments
|
https://api.github.com/repos/ollama/ollama/issues/1450/events
|
https://github.com/ollama/ollama/issues/1450
| 2,034,192,106
|
I_kwDOJ0Z1Ps55P07q
| 1,450
|
Use hard link to import GGUF on the same host to save disk space
|
{
"login": "xleven",
"id": 10850975,
"node_id": "MDQ6VXNlcjEwODUwOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/10850975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xleven",
"html_url": "https://github.com/xleven",
"followers_url": "https://api.github.com/users/xleven/followers",
"following_url": "https://api.github.com/users/xleven/following{/other_user}",
"gists_url": "https://api.github.com/users/xleven/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xleven/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xleven/subscriptions",
"organizations_url": "https://api.github.com/users/xleven/orgs",
"repos_url": "https://api.github.com/users/xleven/repos",
"events_url": "https://api.github.com/users/xleven/events{/privacy}",
"received_events_url": "https://api.github.com/users/xleven/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-12-10T02:35:18
| 2023-12-11T17:32:57
| 2023-12-11T17:32:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If I understand it correctly, the first step of a GGUF import is copying the binary into the model dir under a hashed name. As the number of models (mainly GGUF) grows, these duplicated binaries can take up a lot of disk space.
I think hard links, or using the raw GGUFs directly if possible, would do the space saving, though this only makes sense when the client and server are on the same host and the GGUF and model dir are on the same disk.
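A conceptual sketch of the saving (the blob path and hashed name below are illustrative, not Ollama's exact layout):
```shell
BLOB=~/.ollama/models/blobs/sha256-example        # hypothetical destination name
cp ~/models/mistral.gguf "$BLOB"                  # copy: the file now exists twice on disk
rm "$BLOB"
ln ~/models/mistral.gguf "$BLOB"                  # hard link: same inode, no extra space used
df -h ~/.ollama                                   # free space is unchanged after the link
# hard links only work when both paths are on the same filesystem, matching the caveat above
```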
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1450/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/7709
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7709/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7709/comments
|
https://api.github.com/repos/ollama/ollama/issues/7709/events
|
https://github.com/ollama/ollama/pull/7709
| 2,666,319,423
|
PR_kwDOJ0Z1Ps6CKpZp
| 7,709
|
docs: add customization section in linux.md
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-17T19:00:58
| 2024-11-17T19:48:14
| 2024-11-17T19:48:12
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7709",
"html_url": "https://github.com/ollama/ollama/pull/7709",
"diff_url": "https://github.com/ollama/ollama/pull/7709.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7709.patch",
"merged_at": "2024-11-17T19:48:12"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7709/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1192
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1192/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1192/comments
|
https://api.github.com/repos/ollama/ollama/issues/1192/events
|
https://github.com/ollama/ollama/pull/1192
| 2,000,612,686
|
PR_kwDOJ0Z1Ps5f1Aqq
| 1,192
|
main_gpu argument is not getting set for llamacpp
|
{
"login": "purinda",
"id": 3181510,
"node_id": "MDQ6VXNlcjMxODE1MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3181510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/purinda",
"html_url": "https://github.com/purinda",
"followers_url": "https://api.github.com/users/purinda/followers",
"following_url": "https://api.github.com/users/purinda/following{/other_user}",
"gists_url": "https://api.github.com/users/purinda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/purinda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/purinda/subscriptions",
"organizations_url": "https://api.github.com/users/purinda/orgs",
"repos_url": "https://api.github.com/users/purinda/repos",
"events_url": "https://api.github.com/users/purinda/events{/privacy}",
"received_events_url": "https://api.github.com/users/purinda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-18T23:06:12
| 2023-11-21T13:05:44
| 2023-11-20T15:52:52
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1192",
"html_url": "https://github.com/ollama/ollama/pull/1192",
"diff_url": "https://github.com/ollama/ollama/pull/1192.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1192.patch",
"merged_at": "2023-11-20T15:52:52"
}
|
On a multi-GPU platform I observed that I cannot tell llama.cpp which main GPU to use, even though llama.cpp itself supports this through the `main_gpu` argument.
This PR fixes just that.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1192/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1188
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1188/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1188/comments
|
https://api.github.com/repos/ollama/ollama/issues/1188/events
|
https://github.com/ollama/ollama/issues/1188
| 2,000,244,238
|
I_kwDOJ0Z1Ps53OU4O
| 1,188
|
Enhancement Request: Network-Distributed Inference(NDI) and Intuitive Resource Sharing
|
{
"login": "repollo",
"id": 2671466,
"node_id": "MDQ6VXNlcjI2NzE0NjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2671466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/repollo",
"html_url": "https://github.com/repollo",
"followers_url": "https://api.github.com/users/repollo/followers",
"following_url": "https://api.github.com/users/repollo/following{/other_user}",
"gists_url": "https://api.github.com/users/repollo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/repollo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/repollo/subscriptions",
"organizations_url": "https://api.github.com/users/repollo/orgs",
"repos_url": "https://api.github.com/users/repollo/repos",
"events_url": "https://api.github.com/users/repollo/events{/privacy}",
"received_events_url": "https://api.github.com/users/repollo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2023-11-18T05:27:53
| 2024-03-11T18:04:19
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am proposing an enhancement for the Ollama project that I believe would significantly benefit all users, especially those with an interest in distributed computing and AI.
**Proposed Enhancements:**
1. **Network Distribution Toggle:** I want to implement a toggle option in the system tray menu labeled "Network Distribution." This feature would let users opt in to using network-distributed models for inference tasks. With user consent, when their computer is idle, it would contribute processing power to a shared network pool.
2. **Preferences Menu:** The "Preferences" option within the tray menu will lead to an intuitive interface. This interface will allow users to configure Ollama settings and select which distributed models they wish to support or use for inference. In the future, it could also include options for contributing to fine-tuning processes through distributed computing efforts.
3. **User Experience & Accessibility:** The Preferences interface will be designed to be intuitive for non-technical users, ensuring they can easily configure settings and understand which models they are contributing to and the benefits of doing so.
**Visual Example:**
<img width="212" alt="Screenshot 2023-11-18 at 1 08 47 AM" src="https://github.com/jmorganca/ollama/assets/2671466/5e374434-2f9c-4837-96cd-af3cedb83a66">
In the image, the current system tray menu includes "Network Distribution" and "Preferences" options.
**Current Progress:**
So far, the modifications have been focused on the user interface, without any backend changes. This has been done to demonstrate how the changes are unintrusive to the current UI. Here's a summary of the changes made to `app/src/index.ts`:
- Implemented `updateTrayTooltip` to dynamically update the tray tooltip reflecting the network distribution status.
- Added `toggleNetworkDistribution` to handle the toggling of network distribution feature.
- Updated `updateTray` to build a tray menu that now includes a "Preferences" option and a "Network Distribution" toggle.
**Code Snippet:**
```typescript
// ... existing imports and code
function updateTrayTooltip() {
  let tooltipText = 'Ollama';
  if (isNetworkDistributed) {
    tooltipText += ' - Network Distribution is ON';
  } else {
    tooltipText += ' - Network Distribution is OFF';
  }
  if (tray) {
    tray.setToolTip(tooltipText);
  }
}

let isNetworkDistributed: boolean = store.get('isNetworkDistributed', false) as boolean;

function toggleNetworkDistribution() {
  isNetworkDistributed = !isNetworkDistributed;
  // store.set('isNetworkDistributed', isNetworkDistributed);
  // updateTray(); // Call this to update the tray icon and menu
  // Call updateTrayTooltip() whenever you need to update the tooltip,
  // for example, after toggling the network distribution state:
  updateTrayTooltip(); // Update the tooltip text
  // Additional code to handle the activation/deactivation of network resource pooling
  if (isNetworkDistributed) {
    // Code to handle when network distribution is activated
    console.log('Network distribution is enabled.');
  } else {
    // Code to handle when network distribution is deactivated
    console.log('Network distribution is disabled.');
  }
}

// Add the network distribution toggle to your menu template
const networkDistributionToggle: MenuItemConstructorOptions = {
  label: 'Network Distribution',
  type: 'checkbox',
  checked: isNetworkDistributed,
  click: toggleNetworkDistribution,
};

function updateTray() {
  const updateItems: MenuItemConstructorOptions[] = [
    { label: 'An update is available', enabled: false },
    {
      label: 'Restart to update',
      click: () => autoUpdater.quitAndInstall(),
    },
    { type: 'separator' },
  ]
  const menu = Menu.buildFromTemplate([
    ...(updateAvailable ? updateItems : []),
    networkDistributionToggle, // Include the network distribution toggle here
    // no 'quit' role here, otherwise selecting Preferences would quit the app
    { label: 'Preferences', accelerator: 'Command+,', click: () => firstRunWindow() },
    { role: 'quit', label: 'Quit Ollama', accelerator: 'Command+Q' },
  ])
  if (!tray) {
    tray = new Tray(trayIconPath())
  }
  tray.setToolTip(updateAvailable ? 'An update is available' : 'Ollama')
  tray.setContextMenu(menu)
  tray.setImage(trayIconPath())
  nativeTheme.off('updated', updateTrayIcon)
  nativeTheme.on('updated', updateTrayIcon)
}
// ... the rest of the code
```
**Why This Matters:**
Implementing these features would advance Ollama's capabilities and encourage greater community involvement in AI development. It supports a vision of making AI more accessible and collaborative.
I would greatly appreciate the community's support, feedback, and contributions to realize this vision. Your expertise in Electron, UI/UX design, or network programming would be invaluable.
Thank you for considering this enhancement. I am eager to discuss further and work together on making this a reality for Ollama.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1188/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1188/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2801
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2801/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2801/comments
|
https://api.github.com/repos/ollama/ollama/issues/2801/events
|
https://github.com/ollama/ollama/issues/2801
| 2,158,182,325
|
I_kwDOJ0Z1Ps6Aoz-1
| 2,801
|
Port should be changeable
|
{
"login": "pankajkumar229",
"id": 1482916,
"node_id": "MDQ6VXNlcjE0ODI5MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1482916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pankajkumar229",
"html_url": "https://github.com/pankajkumar229",
"followers_url": "https://api.github.com/users/pankajkumar229/followers",
"following_url": "https://api.github.com/users/pankajkumar229/following{/other_user}",
"gists_url": "https://api.github.com/users/pankajkumar229/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pankajkumar229/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pankajkumar229/subscriptions",
"organizations_url": "https://api.github.com/users/pankajkumar229/orgs",
"repos_url": "https://api.github.com/users/pankajkumar229/repos",
"events_url": "https://api.github.com/users/pankajkumar229/events{/privacy}",
"received_events_url": "https://api.github.com/users/pankajkumar229/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-02-28T05:59:53
| 2024-03-04T05:24:45
| 2024-03-01T01:36:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I could not change the port Ollama listens on. I hope we can run multiple instances on different ports.
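A small sketch of what already works today via the `OLLAMA_HOST` variable (the ports and addresses are arbitrary examples):
```shell
OLLAMA_HOST=127.0.0.1:11435 ollama serve &      # first instance on a non-default port
OLLAMA_HOST=127.0.0.1:11436 ollama serve &      # second instance on another port
OLLAMA_HOST=127.0.0.1:11435 ollama run llama2   # point the CLI client at a specific instance
```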
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2801/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4721
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4721/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4721/comments
|
https://api.github.com/repos/ollama/ollama/issues/4721/events
|
https://github.com/ollama/ollama/pull/4721
| 2,325,585,662
|
PR_kwDOJ0Z1Ps5xAcOr
| 4,721
|
Add LoongArch64 ISA Support
|
{
"login": "HougeLangley",
"id": 1161594,
"node_id": "MDQ6VXNlcjExNjE1OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1161594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HougeLangley",
"html_url": "https://github.com/HougeLangley",
"followers_url": "https://api.github.com/users/HougeLangley/followers",
"following_url": "https://api.github.com/users/HougeLangley/following{/other_user}",
"gists_url": "https://api.github.com/users/HougeLangley/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HougeLangley/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HougeLangley/subscriptions",
"organizations_url": "https://api.github.com/users/HougeLangley/orgs",
"repos_url": "https://api.github.com/users/HougeLangley/repos",
"events_url": "https://api.github.com/users/HougeLangley/events{/privacy}",
"received_events_url": "https://api.github.com/users/HougeLangley/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-30T12:40:53
| 2024-06-15T17:18:47
| 2024-06-15T17:18:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4721",
"html_url": "https://github.com/ollama/ollama/pull/4721",
"diff_url": "https://github.com/ollama/ollama/pull/4721.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4721.patch",
"merged_at": null
}
|
1. go.mod: replace github.com/chewxy/math32 v1.10.1 with github.com/chewxy/math32 v1.10.2-0.20240509203351, which fixes https://github.com/chewxy/math32/issues/23;
2. go.sum: updated accordingly;
3. llm.go: add loong64 support;
4. gen_common.sh: add 64-bit LoongArch support;
5. gen_linux.sh: add loongarch64 ISA LASX/LSX support;
6. fixes https://github.com/ollama/ollama/issues/4552
|
{
"login": "HougeLangley",
"id": 1161594,
"node_id": "MDQ6VXNlcjExNjE1OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1161594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HougeLangley",
"html_url": "https://github.com/HougeLangley",
"followers_url": "https://api.github.com/users/HougeLangley/followers",
"following_url": "https://api.github.com/users/HougeLangley/following{/other_user}",
"gists_url": "https://api.github.com/users/HougeLangley/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HougeLangley/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HougeLangley/subscriptions",
"organizations_url": "https://api.github.com/users/HougeLangley/orgs",
"repos_url": "https://api.github.com/users/HougeLangley/repos",
"events_url": "https://api.github.com/users/HougeLangley/events{/privacy}",
"received_events_url": "https://api.github.com/users/HougeLangley/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4721/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3453
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3453/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3453/comments
|
https://api.github.com/repos/ollama/ollama/issues/3453/events
|
https://github.com/ollama/ollama/issues/3453
| 2,220,053,399
|
I_kwDOJ0Z1Ps6EU1OX
| 3,453
|
Some ollama cli instructions: specially stop
|
{
"login": "ejgutierrez74",
"id": 11474846,
"node_id": "MDQ6VXNlcjExNDc0ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11474846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ejgutierrez74",
"html_url": "https://github.com/ejgutierrez74",
"followers_url": "https://api.github.com/users/ejgutierrez74/followers",
"following_url": "https://api.github.com/users/ejgutierrez74/following{/other_user}",
"gists_url": "https://api.github.com/users/ejgutierrez74/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ejgutierrez74/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ejgutierrez74/subscriptions",
"organizations_url": "https://api.github.com/users/ejgutierrez74/orgs",
"repos_url": "https://api.github.com/users/ejgutierrez74/repos",
"events_url": "https://api.github.com/users/ejgutierrez74/events{/privacy}",
"received_events_url": "https://api.github.com/users/ejgutierrez74/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-04-02T09:48:19
| 2024-09-02T19:36:38
| 2024-09-01T23:55:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
- Stop
- Restart
- Probably change OLLAMA_MODELS, OLLAMA_HOST and OLLAMA_PORT
It would also be nice to rename ollama serve to ollama start, for consistency with other services and similar tools.
### How should we solve this?
- Create CLI commands that make this possible, for example:
$ ollama stop serve
$ ollama setdirectory /media/eduardo/ollama_models
$ ollama sethost 127.52.56.63
$ ollama restart serve
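For comparison, a rough sketch of how these are typically handled today, assuming the systemd-based Linux install (paths and addresses are only examples):
```shell
sudo systemctl stop ollama                        # stop the background server
sudo systemctl restart ollama                     # restart it
OLLAMA_MODELS=/media/eduardo/ollama_models \
OLLAMA_HOST=127.52.56.63 ollama serve             # manual run with a custom model dir and host
```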
### What is the impact of not solving this?
Improve usability and give more tools to users/developers.
### Anything else?
_No response_
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3453/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3453/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3609
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3609/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3609/comments
|
https://api.github.com/repos/ollama/ollama/issues/3609/events
|
https://github.com/ollama/ollama/issues/3609
| 2,238,838,325
|
I_kwDOJ0Z1Ps6FcfY1
| 3,609
|
Issue Storage Filling up need help! (Ubuntu server 22.04)
|
{
"login": "alfi4000",
"id": 149228038,
"node_id": "U_kgDOCOUKBg",
"avatar_url": "https://avatars.githubusercontent.com/u/149228038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alfi4000",
"html_url": "https://github.com/alfi4000",
"followers_url": "https://api.github.com/users/alfi4000/followers",
"following_url": "https://api.github.com/users/alfi4000/following{/other_user}",
"gists_url": "https://api.github.com/users/alfi4000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alfi4000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alfi4000/subscriptions",
"organizations_url": "https://api.github.com/users/alfi4000/orgs",
"repos_url": "https://api.github.com/users/alfi4000/repos",
"events_url": "https://api.github.com/users/alfi4000/events{/privacy}",
"received_events_url": "https://api.github.com/users/alfi4000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-04-12T00:55:53
| 2024-04-22T23:48:19
| 2024-04-22T23:48:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I run Ollama, my /dev/mapper volume fills up. The marked storage in the screenshot below keeps growing. How can I stop this from happening?
<img width="552" alt="Bildschirmfoto 2024-04-11 um 17 50 08" src="https://github.com/ollama/ollama/assets/166188813/19a72be1-4179-4906-a0e8-ebdd6135a9e3">
This is the command I am using to run Ollama: `OLLAMA_HOST=192.168.50.53:11435 ollama serve &`
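For anyone hitting the same thing, the space is most likely the downloaded model blobs. A quick way to confirm, and to move them to a larger disk if needed (the paths here are examples, not taken from this report):
```bash
# How much space is the model store using?
du -sh /usr/share/ollama/.ollama/models   # default location for the systemd install
du -sh ~/.ollama/models                   # user-level install

# Point Ollama at a bigger disk instead (example path)
OLLAMA_HOST=192.168.50.53:11435 OLLAMA_MODELS=/mnt/bigdisk/ollama_models ollama serve &
```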
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
x86
### Platform
WSL, WSL2
### Ollama version
0.1.31
### GPU
Nvidia
### GPU info
_No response_
### CPU
Intel
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3609/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6246
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6246/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6246/comments
|
https://api.github.com/repos/ollama/ollama/issues/6246/events
|
https://github.com/ollama/ollama/issues/6246
| 2,454,514,541
|
I_kwDOJ0Z1Ps6STOtt
| 6,246
|
Modelfile - Customize a prompt
|
{
"login": "LucasFreitas88",
"id": 177795987,
"node_id": "U_kgDOCpjzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/177795987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LucasFreitas88",
"html_url": "https://github.com/LucasFreitas88",
"followers_url": "https://api.github.com/users/LucasFreitas88/followers",
"following_url": "https://api.github.com/users/LucasFreitas88/following{/other_user}",
"gists_url": "https://api.github.com/users/LucasFreitas88/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LucasFreitas88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LucasFreitas88/subscriptions",
"organizations_url": "https://api.github.com/users/LucasFreitas88/orgs",
"repos_url": "https://api.github.com/users/LucasFreitas88/repos",
"events_url": "https://api.github.com/users/LucasFreitas88/events{/privacy}",
"received_events_url": "https://api.github.com/users/LucasFreitas88/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 11
| 2024-08-07T23:33:35
| 2024-08-08T18:28:10
| 2024-08-08T18:28:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I cannot customize the Modelfile with new prompt instructions, as shown in the documentation example ("Customize a prompt").
Model: Llama 3.1 8B
Notebook: MacBook Air M1 - macOS Sonoma 14.6.1
The answer to the question posed in the example (hi) is an endless sequence of strange characters like:
H*114@(02228'6.:@6@?/B+:&H((,/1A:8/=>=<.C-.C2>9C*;H!$C=+5&,'*&C7@44D>&BC=D"C%6<BB%;;$*//2<D814),';?:@!!9:2H*114@(02228'6.:@6@?/B+:&H((,/1A:8/=>=<.-C-.C2>9C*;H!$C=+5&,'*&C7@44D>&BC=D"C%6< ...
What is happening? The procedure I used is exactly the same as described in the documentation.
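For comparison, the documented flow I am referring to looks roughly like this (the model name and system prompt follow the README's example, adapted to llama3.1):
```bash
cat > Modelfile <<'EOF'
FROM llama3.1
PARAMETER temperature 1
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
EOF

ollama create mario -f ./Modelfile
ollama run mario
```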
### OS
macOS
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.3.4
|
{
"login": "LucasFreitas88",
"id": 177795987,
"node_id": "U_kgDOCpjzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/177795987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LucasFreitas88",
"html_url": "https://github.com/LucasFreitas88",
"followers_url": "https://api.github.com/users/LucasFreitas88/followers",
"following_url": "https://api.github.com/users/LucasFreitas88/following{/other_user}",
"gists_url": "https://api.github.com/users/LucasFreitas88/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LucasFreitas88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LucasFreitas88/subscriptions",
"organizations_url": "https://api.github.com/users/LucasFreitas88/orgs",
"repos_url": "https://api.github.com/users/LucasFreitas88/repos",
"events_url": "https://api.github.com/users/LucasFreitas88/events{/privacy}",
"received_events_url": "https://api.github.com/users/LucasFreitas88/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6246/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6245
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6245/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6245/comments
|
https://api.github.com/repos/ollama/ollama/issues/6245/events
|
https://github.com/ollama/ollama/issues/6245
| 2,454,480,087
|
I_kwDOJ0Z1Ps6STGTX
| 6,245
|
A character gets skipped here and there in the output, using any model, over any tunnel
|
{
"login": "embium",
"id": 82550035,
"node_id": "MDQ6VXNlcjgyNTUwMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/82550035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/embium",
"html_url": "https://github.com/embium",
"followers_url": "https://api.github.com/users/embium/followers",
"following_url": "https://api.github.com/users/embium/following{/other_user}",
"gists_url": "https://api.github.com/users/embium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/embium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/embium/subscriptions",
"organizations_url": "https://api.github.com/users/embium/orgs",
"repos_url": "https://api.github.com/users/embium/repos",
"events_url": "https://api.github.com/users/embium/events{/privacy}",
"received_events_url": "https://api.github.com/users/embium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-08-07T22:51:50
| 2024-08-07T22:58:10
| 2024-08-07T22:58:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello,
Here is the output:
```
**The Art of Ball Handling: Mastering Control on and off the Court**
When it comes to various sports, mastering ball handling is a aspect that separates the good players from the great ones. Whether you're playing basketball, soccer, or tennis, controlling the ball with precision finesse can make all the difference in your game.
**The Fundamentals of Ball**
Ball handling begins with understanding the basics.'s not just about throwing the ball from one hand to another; it's about developing muscle memory and coordination that allows you to control the with ease. To improve your ball handling, practice basic drills such as:
* **Stationary Dribbling:** Start by standing and dribbling the ball with both hands. Focus on keeping your head up, staying low, and using different parts of your hand to the ball.
* **Figure-Eights:** Draw a figure-eight pattern on the ground with your non-dominant foot while keeping ball close to your body. Switch directions and repeat with your dominant foot.
**Advanced Ball Handling Techniques**
As you become more comfortable with drills, it's time to move on to advanced techniques that will take your game to the next level:
* **Behind-the- Dribble:** This technique involves dribbling the ball behind your back while keeping it close to your body. Start by standing still and move into a crouch position.
* **Between-the-Legs Dribble:** Similar to the behind-the-back dribble, technique involves dribbling the ball between your legs while keeping it close to your body.
**Tips for Improving Ball Handling**
Mastering handling takes time, patience, and dedication. Here are some additional tips to help you improve:
* **Practice Regularly:** Cons is key when it comes to improving ball handling. Aim to practice at least 30 minutes a day.
* **Stay Relaxed Tension in your arms and hands can lead to sloppy ball handling. Practice relaxing while keeping control of the ball.
**Ball Handling for Sports**
While the fundamentals of ball handling remain the same across various sports, there are some specific techniques that apply to each game:
* **Basketball:** In basketball, ball handling is critical for creating scoring opportunities and breaking down defenses.
* **Soccer:** Ball in soccer involves using different parts of your foot to trap, dribble, and pass the ball.
**Conclusion**
Mastering ball handling time and practice, but with dedication and patience, you can develop the skills needed to dominate on the court. Whether you're playing basketball soccer, or tennis, remember to stay relaxed, focus on proper technique, and practice regularly.
---
### **Additional Resources:**
[Basketball Ball Handling Drills](https://www.basketball-drills.com/ball-handling/)
* [S Ball Control Tips](https://www.soccer-training.org/soccer-ball-control-tips/)
### **Books on Ball Handling:**
"The Art of Dribbling" by Kevin Durant
* "Ball Control for Soccer" by Pep Guardiola
NoteKeep in mind that these books and resources are subject to change, so be sure to verify the information before using them.
By following the and techniques outlined above, you can improve your ball handling skills and take your game to the next level. Remember to practice regularly, stay, and focus on proper technique. Happy practicing!
```
As you can see there are a couple of errors here, for instance
> **Stay Relaxed
is missing the closing ** at the end.
Everything seems to work fine locally (http://localhost:11434), but once I go through a Cloudflare tunnel it seems to skip characters here and there.
Why could this be, and is there a possible fix?
Thanks
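One way to narrow this down is to stream the same request directly and through the tunnel and check whether every streamed chunk is still valid JSON; bytes dropped in transit will show up as parse errors (the tunnel hostname below is a placeholder):
```bash
# Direct to the local server
curl -sN http://localhost:11434/api/generate \
  -d '{"model":"llama3.1","prompt":"Write a blog post about ball handling"}' \
  | while read -r line; do echo "$line" | python3 -m json.tool > /dev/null || echo "bad chunk: $line"; done

# Through the tunnel
curl -sN https://my-tunnel.example.com/api/generate \
  -d '{"model":"llama3.1","prompt":"Write a blog post about ball handling"}' \
  | while read -r line; do echo "$line" | python3 -m json.tool > /dev/null || echo "bad chunk: $line"; done
```
If the local stream is clean but the tunnel stream reports bad chunks, the characters are being dropped in transit rather than by the model.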
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.4
|
{
"login": "embium",
"id": 82550035,
"node_id": "MDQ6VXNlcjgyNTUwMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/82550035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/embium",
"html_url": "https://github.com/embium",
"followers_url": "https://api.github.com/users/embium/followers",
"following_url": "https://api.github.com/users/embium/following{/other_user}",
"gists_url": "https://api.github.com/users/embium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/embium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/embium/subscriptions",
"organizations_url": "https://api.github.com/users/embium/orgs",
"repos_url": "https://api.github.com/users/embium/repos",
"events_url": "https://api.github.com/users/embium/events{/privacy}",
"received_events_url": "https://api.github.com/users/embium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6245/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1199
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1199/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1199/comments
|
https://api.github.com/repos/ollama/ollama/issues/1199/events
|
https://github.com/ollama/ollama/pull/1199
| 2,000,995,719
|
PR_kwDOJ0Z1Ps5f2MVM
| 1,199
|
Fix issues sending incomplete body and add retry backoff for `ollama push`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-19T19:12:20
| 2023-11-19T19:32:20
| 2023-11-19T19:32:19
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1199",
"html_url": "https://github.com/ollama/ollama/pull/1199",
"diff_url": "https://github.com/ollama/ollama/pull/1199.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1199.patch",
"merged_at": "2023-11-19T19:32:19"
}
|
Builds on #1184
This change increases the upload chunk sizes and adds more graceful retry backoff to fix transient network issues when using `ollama push`.
It also fixes an issue where an incomplete body would be uploaded, requiring the need for a retry.
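For readers unfamiliar with the pattern, retry-with-backoff behaves roughly like this shell sketch; it is purely illustrative and not the Go code in this PR (the URL and chunk file are placeholders):
```bash
# Retry a flaky chunk upload, doubling the wait after each failure.
max_retries=6
delay=1
for attempt in $(seq 1 "$max_retries"); do
  if curl -fsS -T chunk.bin "https://registry.example.com/upload"; then
    echo "upload succeeded on attempt $attempt"
    break
  fi
  echo "attempt $attempt failed, retrying in ${delay}s" >&2
  sleep "$delay"
  delay=$((delay * 2))
done
```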
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1199/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7556
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7556/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7556/comments
|
https://api.github.com/repos/ollama/ollama/issues/7556/events
|
https://github.com/ollama/ollama/issues/7556
| 2,640,935,358
|
I_kwDOJ0Z1Ps6daXm-
| 7,556
|
llama runner process has terminated: error loading model: unable to allocate backend buffer when AMD iGPU vram allocation larger than 8GB
|
{
"login": "oatmealm",
"id": 68159077,
"node_id": "MDQ6VXNlcjY4MTU5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/68159077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oatmealm",
"html_url": "https://github.com/oatmealm",
"followers_url": "https://api.github.com/users/oatmealm/followers",
"following_url": "https://api.github.com/users/oatmealm/following{/other_user}",
"gists_url": "https://api.github.com/users/oatmealm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oatmealm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oatmealm/subscriptions",
"organizations_url": "https://api.github.com/users/oatmealm/orgs",
"repos_url": "https://api.github.com/users/oatmealm/repos",
"events_url": "https://api.github.com/users/oatmealm/events{/privacy}",
"received_events_url": "https://api.github.com/users/oatmealm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
open
| false
| null |
[] | null | 3
| 2024-11-07T12:52:44
| 2024-11-07T21:50:49
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After setting the iGPU allocation to 16GB (out of 32GB), some models crash when loaded, while others manage to load.
```
ollama run llama3.2
Error: llama runner process has terminated: cudaMalloc failed: out of memory
llama_kv_cache_init: failed to allocate buffer for kv cache
llama_new_context_with_model: llama_kv_cache_init() failed for self-attention cache
```
```
ollama run llama3.2:3b-instruct-q6_K
Error: llama runner process has terminated: error loading model: unable to allocate backend buffer
llama_load_model_from_file: exception loading model
```
```
ollama run smollm2:1.7b-instruct-q6_K
>>> Send a message (/? for help)
```
With a smaller RAM/VRAM split, like 4GB, Ollama loads models fully into VRAM, or splits them across GPU and CPU.
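As a possible workaround while this is investigated, the number of layers offloaded to the GPU can be capped per request with the `num_gpu` option, leaving the rest on the CPU (the value 20 below is only an example):
```bash
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:3b-instruct-q6_K",
  "prompt": "Hello",
  "options": { "num_gpu": 20 }
}'
```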
```
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=9.0.0"
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_KEEP_ALIVE=24h"
```
```
rocminfo
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version: 1.1
Runtime Ext Version: 1.6
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
Mwaitx: DISABLED
DMAbuf Support: YES
==========
HSA Agents
==========
*******
Agent 1
*******
Name: AMD Ryzen 9 5900HX with Radeon Graphics
Uuid: CPU-XX
Marketing Name: AMD Ryzen 9 5900HX with Radeon Graphics
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
Cache Info:
L1: 32768(0x8000) KB
Chip ID: 0(0x0)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 4680
BDFID: 0
Internal Node ID: 0
Compute Unit: 16
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:1
Memory Properties:
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 16285796(0xf88064) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 16285796(0xf88064) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 16285796(0xf88064) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:
*******
Agent 2
*******
Name: gfx90c
Uuid: GPU-XX
Marketing Name: AMD Radeon Graphics
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 1
Device Type: GPU
Cache Info:
L1: 16(0x10) KB
L2: 1024(0x400) KB
Chip ID: 5688(0x1638)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 2100
BDFID: 1024
Internal Node ID: 1
Compute Unit: 8
SIMDs per CU: 4
Shader Engines: 1
Shader Arrs. per Eng.: 1
WatchPts on Addr. Ranges:4
Coherent Host Access: FALSE
Memory Properties: APU
Features: KERNEL_DISPATCH
Fast F16 Operation: TRUE
Wavefront Size: 64(0x40)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 40(0x28)
Max Work-item Per CU: 2560(0xa00)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Packet Processor uCode:: 472
SDMA engine uCode:: 40
IOMMU Support:: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 8142896(0x7c4030) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:2048KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 8142896(0x7c4030) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:2048KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 3
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Recommended Granule:0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx90c:xnack+
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*** Done ***
```
```
journalctl -u ollama.service -n 100 --no-pager
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 1: general.type str = model
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 2: general.name str = Llama 3.2 3B Instruct
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 3: general.finetune str = Instruct
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 4: general.basename str = Llama-3.2
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 5: general.size_label str = 3B
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 6: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 7: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 8: llama.block_count u32 = 28
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 9: llama.context_length u32 = 131072
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 10: llama.embedding_length u32 = 3072
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 11: llama.feed_forward_length u32 = 8192
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 12: llama.attention.head_count u32 = 24
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 14: llama.rope.freq_base f32 = 500000.000000
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 16: llama.attention.key_length u32 = 128
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 17: llama.attention.value_length u32 = 128
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 18: general.file_type u32 = 18
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 19: llama.vocab_size u32 = 128256
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 128
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 22: tokenizer.ggml.pre str = llama-bpe
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 128000
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 128009
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 28: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 29: general.quantization_version u32 = 2
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - type f32: 58 tensors
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - type q6_K: 197 tensors
Nov 07 13:49:56 slimb ollama[1817]: time=2024-11-07T13:49:56.473+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Nov 07 13:49:56 slimb ollama[1817]: llm_load_vocab: special tokens cache size = 256
Nov 07 13:49:56 slimb ollama[1817]: llm_load_vocab: token to piece cache size = 0.7999 MB
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: format = GGUF V3 (latest)
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: arch = llama
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: vocab type = BPE
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_vocab = 128256
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_merges = 280147
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: vocab_only = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_ctx_train = 131072
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd = 3072
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_layer = 28
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_head = 24
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_head_kv = 8
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_rot = 128
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_swa = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_head_k = 128
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_head_v = 128
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_gqa = 3
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_k_gqa = 1024
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_v_gqa = 1024
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_norm_eps = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_logit_scale = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_ff = 8192
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_expert = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_expert_used = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: causal attn = 1
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: pooling type = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: rope type = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: rope scaling = linear
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: freq_base_train = 500000.0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: freq_scale_train = 1
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_ctx_orig_yarn = 131072
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: rope_finetuned = unknown
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_d_conv = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_d_inner = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_d_state = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_dt_rank = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model type = 3B
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model ftype = Q6_K
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model params = 3.21 B
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model size = 2.45 GiB (6.56 BPW)
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: general.name = Llama 3.2 3B Instruct
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: LF token = 128 'Ä'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: max token length = 256
Nov 07 13:49:56 slimb ollama[1817]: /opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
Nov 07 13:49:57 slimb ollama[1817]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Nov 07 13:49:57 slimb ollama[1817]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 07 13:49:57 slimb ollama[1817]: ggml_cuda_init: found 1 ROCm devices:
Nov 07 13:49:57 slimb ollama[1817]: Device 0: AMD Radeon Graphics, compute capability 9.0, VMM: no
Nov 07 13:49:57 slimb ollama[1817]: llm_load_tensors: ggml ctx size = 0.24 MiB
Nov 07 13:49:57 slimb ollama[1817]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2513.91 MiB on device 0: cudaMalloc failed: out of memory
Nov 07 13:49:57 slimb ollama[1817]: llama_model_load: error loading model: unable to allocate backend buffer
Nov 07 13:49:57 slimb ollama[1817]: llama_load_model_from_file: exception loading model
Nov 07 13:49:57 slimb ollama[1817]: terminate called after throwing an instance of 'std::runtime_error'
Nov 07 13:49:57 slimb ollama[1817]: what(): unable to allocate backend buffer
Nov 07 13:49:57 slimb ollama[1817]: time=2024-11-07T13:49:57.678+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
Nov 07 13:49:59 slimb ollama[1817]: time=2024-11-07T13:49:59.282+01:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate backend buffer\nllama_load_model_from_file: exception loading model"
Nov 07 13:49:59 slimb ollama[1817]: [GIN] 2024/11/07 - 13:49:59 | 500 | 3.106339582s | 127.0.0.1 | POST "/api/generate"
```
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.14
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7556/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1712
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1712/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1712/comments
|
https://api.github.com/repos/ollama/ollama/issues/1712/events
|
https://github.com/ollama/ollama/issues/1712
| 2,055,880,223
|
I_kwDOJ0Z1Ps56ij4f
| 1,712
|
Ollama version
|
{
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/followers",
"following_url": "https://api.github.com/users/xyproto/following{/other_user}",
"gists_url": "https://api.github.com/users/xyproto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyproto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyproto/subscriptions",
"organizations_url": "https://api.github.com/users/xyproto/orgs",
"repos_url": "https://api.github.com/users/xyproto/repos",
"events_url": "https://api.github.com/users/xyproto/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyproto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2023-12-25T21:45:40
| 2024-09-10T11:05:33
| 2023-12-26T23:02:10
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, maintainer of the Arch Linux [`ollama`](https://gitlab.archlinux.org/archlinux/packaging/packages/ollama/) package here.
`ollama --version` is "0.0.0" after building Ollama from source on Arch Linux. Is this intentional? Is there something this `PKGBUILD` is missing?
Thanks in advance.
```bash
pkgname=ollama
pkgdesc='Create, run and share large language models (LLMs)'
pkgver=0.1.17
pkgrel=2
arch=(x86_64)
url='https://github.com/jmorganca/ollama'
license=(MIT)
makedepends=(cmake git go setconf)
_ollamacommit=6b5bdfa6c9321405174ad443f21c2e41db36a867 # tag: v0.1.17
# The git submodule commit hashes are here:
# https://github.com/jmorganca/ollama/tree/v0.1.17/llm/llama.cpp
_ggmlcommit=9e232f0234073358e7031c1b8d7aa45020469a3b
_ggufcommit=a7aee47b98e45539d491071b25778b833b77e387
source=(git+$url#commit=$_ollamacommit
ggml::git+https://github.com/ggerganov/llama.cpp#commit=$_ggmlcommit
gguf::git+https://github.com/ggerganov/llama.cpp#commit=$_ggufcommit
sysusers.conf
tmpfiles.d
ollama.service)
b2sums=('SKIP'
'SKIP'
'SKIP'
'3aabf135c4f18e1ad745ae8800db782b25b15305dfeaaa031b4501408ab7e7d01f66e8ebb5be59fc813cfbff6788d08d2e48dcf24ecc480a40ec9db8dbce9fec'
'c890a741958d31375ebbd60eeeb29eff965a6e1e69f15eb17ea7d15b575a4abee176b7d407b3e1764aa7436862a764a05ad04bb9901a739ffd81968c09046bb6'
'a773bbf16cf5ccc2ee505ad77c3f9275346ddf412be283cfeaee7c2e4c41b8637a31aaff8766ed769524ebddc0c03cf924724452639b62208e578d98b9176124')
prepare() {
cd $pkgname
rm -frv llm/llama.cpp/gg{ml,uf}
# Copy git submodule files instead of symlinking because the build process is sensitive to symlinks.
cp -r "$srcdir/ggml" llm/llama.cpp/ggml
cp -r "$srcdir/gguf" llm/llama.cpp/gguf
# Do not git clone when "go generate" is being run.
sed -i 's,git submodule,true,g' llm/llama.cpp/generate_linux.go
# Do not build with CUDA, but turn LTO on
sed -i 's,LLAMA_CUBLAS=on,LLAMA_LTO=on,g' llm/llama.cpp/generate_linux.go
# Set build mode to release
sed -i '33s/DebugMode/ReleaseMode/;45s/DebugMode/ReleaseMode/' "$srcdir/ollama/server/routes.go"
}
build() {
cd $pkgname
export CGO_CFLAGS="$CFLAGS" CGO_CPPFLAGS="$CPPFLAGS" CGO_CXXFLAGS="$CXXFLAGS" CGO_LDFLAGS="$LDFLAGS"
go generate ./...
go build -buildmode=pie -trimpath -mod=readonly -modcacherw -ldflags=-linkmode=external -ldflags=-buildid=''
}
check() {
cd ${pkgname/-cuda}
go test ./...
}
package() {
install -Dm755 $pkgname/$pkgname "$pkgdir/usr/bin/$pkgname"
install -dm700 "$pkgdir/var/lib/ollama"
install -Dm644 ollama.service "$pkgdir/usr/lib/systemd/system/ollama.service"
install -Dm644 sysusers.conf "$pkgdir/usr/lib/sysusers.d/ollama.conf"
install -Dm644 tmpfiles.d "$pkgdir/usr/lib/tmpfiles.d/ollama.conf"
install -Dm644 $pkgname/LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
}
```
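As far as I can tell, the released binaries get their version stamped in at link time, so a from-source build reports 0.0.0 unless the PKGBUILD injects it itself. A sketch of what the `build()` step might add; the exact Go variable path is an assumption and should be verified against the v0.1.17 source tree:
```bash
build() {
  cd $pkgname
  export CGO_CFLAGS="$CFLAGS" CGO_CPPFLAGS="$CPPFLAGS" CGO_CXXFLAGS="$CXXFLAGS" CGO_LDFLAGS="$LDFLAGS"
  go generate ./...
  # Assumption: the version is a plain Go string variable settable via -ldflags -X.
  go build -buildmode=pie -trimpath -mod=readonly -modcacherw \
    -ldflags="-linkmode=external -buildid='' -X=github.com/jmorganca/ollama/version.Version=$pkgver"
}
```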
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1712/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1712/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1066
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1066/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1066/comments
|
https://api.github.com/repos/ollama/ollama/issues/1066/events
|
https://github.com/ollama/ollama/issues/1066
| 1,986,658,571
|
I_kwDOJ0Z1Ps52agEL
| 1,066
|
Error: mkdir permission denied
|
{
"login": "pepsiamir",
"id": 22083243,
"node_id": "MDQ6VXNlcjIyMDgzMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/22083243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pepsiamir",
"html_url": "https://github.com/pepsiamir",
"followers_url": "https://api.github.com/users/pepsiamir/followers",
"following_url": "https://api.github.com/users/pepsiamir/following{/other_user}",
"gists_url": "https://api.github.com/users/pepsiamir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pepsiamir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pepsiamir/subscriptions",
"organizations_url": "https://api.github.com/users/pepsiamir/orgs",
"repos_url": "https://api.github.com/users/pepsiamir/repos",
"events_url": "https://api.github.com/users/pepsiamir/events{/privacy}",
"received_events_url": "https://api.github.com/users/pepsiamir/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2023-11-10T01:07:08
| 2024-03-27T06:48:55
| 2023-11-16T00:41:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
After installing the Mistral and Sqlcoder models, I got this error.
```
verifying sha256 digest
writing manifest
Error: mkdir /usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/library/mistral: permission denied
```
I had to create the directory manually, after which the pull succeeded.
```
sudo mkdir /usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/library/mistral
```
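This usually points at ownership rather than a missing directory; if the server runs as the `ollama` user created by the installer, something like the following may be a cleaner fix than creating each manifest directory by hand (assumes the standard Linux install layout):
```bash
sudo chown -R ollama:ollama /usr/share/ollama/.ollama
```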
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1066/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6136
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6136/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6136/comments
|
https://api.github.com/repos/ollama/ollama/issues/6136/events
|
https://github.com/ollama/ollama/pull/6136
| 2,444,014,175
|
PR_kwDOJ0Z1Ps53NI7o
| 6,136
|
docs: Update api.md
|
{
"login": "farwish",
"id": 6552412,
"node_id": "MDQ6VXNlcjY1NTI0MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6552412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farwish",
"html_url": "https://github.com/farwish",
"followers_url": "https://api.github.com/users/farwish/followers",
"following_url": "https://api.github.com/users/farwish/following{/other_user}",
"gists_url": "https://api.github.com/users/farwish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/farwish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farwish/subscriptions",
"organizations_url": "https://api.github.com/users/farwish/orgs",
"repos_url": "https://api.github.com/users/farwish/repos",
"events_url": "https://api.github.com/users/farwish/events{/privacy}",
"received_events_url": "https://api.github.com/users/farwish/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-08-02T04:31:42
| 2024-11-21T10:16:21
| 2024-11-21T10:16:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6136",
"html_url": "https://github.com/ollama/ollama/pull/6136",
"diff_url": "https://github.com/ollama/ollama/pull/6136.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6136.patch",
"merged_at": null
}
|
Name is deprecated in api/types.go
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6136/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3191
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3191/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3191/comments
|
https://api.github.com/repos/ollama/ollama/issues/3191/events
|
https://github.com/ollama/ollama/issues/3191
| 2,190,489,213
|
I_kwDOJ0Z1Ps6CkDZ9
| 3,191
|
Error: pull model manifest: Get "https://ollama.com/token?nonce=6xXg08tJu5sXzjqrvWKxQA&scope=repository%!A(MISSING)library%!F(MISSING)llama2%!A(MISSING)pull&service=ollama.com&ts=1710652958": read tcp 192.168.5.215:60112->34.120.132.20:443: read: connection reset by peer
|
{
"login": "wbsxhh201771",
"id": 100500363,
"node_id": "U_kgDOBf2Diw",
"avatar_url": "https://avatars.githubusercontent.com/u/100500363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wbsxhh201771",
"html_url": "https://github.com/wbsxhh201771",
"followers_url": "https://api.github.com/users/wbsxhh201771/followers",
"following_url": "https://api.github.com/users/wbsxhh201771/following{/other_user}",
"gists_url": "https://api.github.com/users/wbsxhh201771/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wbsxhh201771/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wbsxhh201771/subscriptions",
"organizations_url": "https://api.github.com/users/wbsxhh201771/orgs",
"repos_url": "https://api.github.com/users/wbsxhh201771/repos",
"events_url": "https://api.github.com/users/wbsxhh201771/events{/privacy}",
"received_events_url": "https://api.github.com/users/wbsxhh201771/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-03-17T05:32:41
| 2024-03-29T00:01:48
| 2024-03-28T20:52:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When I run:
`ollama run llama2`
I get this error:
`Error: pull model manifest: Get "https://ollama.com/token?nonce=6xXg08tJu5sXzjqrvWKxQA&scope=repository%!A(MISSING)library%!F(MISSING)llama2%!A(MISSING)pull&service=ollama.com&ts=1710652958": read tcp 192.168.5.215:60112->34.120.132.20:443: read: connection reset by peer.
`
I tried it on Windows and Ubuntu and hit the same problem. I also turned off the VPN and firewall, but it doesn't help.
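For what it's worth, a "connection reset by peer" during the token request is almost always network-side. Two quick checks that may help pin it down (the proxy address is a placeholder):
```bash
# Can the registry front end be reached at all from this machine?
curl -sv https://ollama.com > /dev/null

# If traffic has to go through a proxy, export it before pulling
export HTTPS_PROXY=http://proxy.example.com:8080
ollama pull llama2
```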
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3191/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/4355
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4355/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4355/comments
|
https://api.github.com/repos/ollama/ollama/issues/4355/events
|
https://github.com/ollama/ollama/issues/4355
| 2,290,833,929
|
I_kwDOJ0Z1Ps6Ii1oJ
| 4,355
|
Ollama doesn' t work well with Zluda after 0.1.34
|
{
"login": "4thanks",
"id": 63891627,
"node_id": "MDQ6VXNlcjYzODkxNjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/63891627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/4thanks",
"html_url": "https://github.com/4thanks",
"followers_url": "https://api.github.com/users/4thanks/followers",
"following_url": "https://api.github.com/users/4thanks/following{/other_user}",
"gists_url": "https://api.github.com/users/4thanks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/4thanks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/4thanks/subscriptions",
"organizations_url": "https://api.github.com/users/4thanks/orgs",
"repos_url": "https://api.github.com/users/4thanks/repos",
"events_url": "https://api.github.com/users/4thanks/events{/privacy}",
"received_events_url": "https://api.github.com/users/4thanks/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-05-11T10:48:29
| 2024-05-13T16:03:17
| 2024-05-13T16:03:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I was using Ollama 0.1.32, it worked well with ZLUDA on my GPU (5700 XT), following the steps in [ollama_windows_10_rx6600xt_zluda](https://www.reddit.com/r/ollama/comments/1cf5tq1/ollama_windows_10_rx6600xt_zluda/).
After recently updating to the newest version (0.1.37), the GPU isn't being utilized anymore. Downgrading to 0.1.34 doesn't help either, but 0.1.33 is OK.
Log after the update:
```
time=2024-05-13T23:45:14.969+08:00 level=INFO source=images.go:704 msg="total blobs: 20"
time=2024-05-13T23:45:14.970+08:00 level=INFO source=images.go:711 msg="total unused blobs removed: 0"
time=2024-05-13T23:45:14.971+08:00 level=INFO source=routes.go:1052 msg="Listening on 127.0.0.1:11434 (version 0.1.37)"
time=2024-05-13T23:45:14.971+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11.3 rocm_v5.7 cpu cpu_avx]"
time=2024-05-13T23:45:15.003+08:00 level=INFO source=gpu.go:197 msg="error looking up nvidia GPU memory" error="nvcuda failed to get primary device context 801"
time=2024-05-13T23:45:15.005+08:00 level=WARN source=amd_windows.go:95 msg="amdgpu is not supported" gpu=0 gpu_type=gfx1010:xnack- library="C:\\Program Files\\AMD\\ROCm\\5.7\\bin" supported_types="[gfx1010 gfx1011 gfx1012 gfx1030 gfx1031 gfx1100 gfx1101 gfx1102 gfx803 gfx900 gfx906]"
time=2024-05-13T23:45:15.005+08:00 level=WARN source=amd_windows.go:97 msg="See https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md for HSA_OVERRIDE_GFX_VERSION usage"
time=2024-05-13T23:45:15.005+08:00 level=INFO source=types.go:71 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="31.8 GiB" available="20.9 GiB"
[GIN] 2024/05/13 - 23:45:48 | 200 | 2.5827ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/05/13 - 23:46:02 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/13 - 23:46:02 | 200 | 1.0369ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/13 - 23:46:02 | 200 | 518.2µs | 127.0.0.1 | POST "/api/show"
time=2024-05-13T23:46:02.341+08:00 level=INFO source=gpu.go:197 msg="error looking up nvidia GPU memory" error="nvcuda failed to get primary device context 801"
time=2024-05-13T23:46:02.343+08:00 level=WARN source=amd_windows.go:95 msg="amdgpu is not supported" gpu=0 gpu_type=gfx1010:xnack- library="C:\\Program Files\\AMD\\ROCm\\5.7\\bin" supported_types="[gfx1010 gfx1011 gfx1012 gfx1030 gfx1031 gfx1100 gfx1101 gfx1102 gfx803 gfx900 gfx906]"
time=2024-05-13T23:46:02.343+08:00 level=WARN source=amd_windows.go:97 msg="See https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md for HSA_OVERRIDE_GFX_VERSION usage"
time=2024-05-13T23:46:02.578+08:00 level=WARN source=server.go:207 msg="multimodal models don't support parallel requests yet"
```
ref: https://github.com/lshqqytiger/ZLUDA/issues/16 https://github.com/lshqqytiger/ZLUDA/issues/13#issuecomment-2085675119
### OS
Windows
### GPU
AMD
### CPU
Intel
### Ollama version
0.1.35
|
{
"login": "4thanks",
"id": 63891627,
"node_id": "MDQ6VXNlcjYzODkxNjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/63891627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/4thanks",
"html_url": "https://github.com/4thanks",
"followers_url": "https://api.github.com/users/4thanks/followers",
"following_url": "https://api.github.com/users/4thanks/following{/other_user}",
"gists_url": "https://api.github.com/users/4thanks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/4thanks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/4thanks/subscriptions",
"organizations_url": "https://api.github.com/users/4thanks/orgs",
"repos_url": "https://api.github.com/users/4thanks/repos",
"events_url": "https://api.github.com/users/4thanks/events{/privacy}",
"received_events_url": "https://api.github.com/users/4thanks/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4355/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4518
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4518/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4518/comments
|
https://api.github.com/repos/ollama/ollama/issues/4518/events
|
https://github.com/ollama/ollama/issues/4518
| 2,304,356,057
|
I_kwDOJ0Z1Ps6JWa7Z
| 4,518
|
Add option to control start of response to generate api
|
{
"login": "notasquid1938",
"id": 99005612,
"node_id": "U_kgDOBea0rA",
"avatar_url": "https://avatars.githubusercontent.com/u/99005612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/notasquid1938",
"html_url": "https://github.com/notasquid1938",
"followers_url": "https://api.github.com/users/notasquid1938/followers",
"following_url": "https://api.github.com/users/notasquid1938/following{/other_user}",
"gists_url": "https://api.github.com/users/notasquid1938/gists{/gist_id}",
"starred_url": "https://api.github.com/users/notasquid1938/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/notasquid1938/subscriptions",
"organizations_url": "https://api.github.com/users/notasquid1938/orgs",
"repos_url": "https://api.github.com/users/notasquid1938/repos",
"events_url": "https://api.github.com/users/notasquid1938/events{/privacy}",
"received_events_url": "https://api.github.com/users/notasquid1938/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 0
| 2024-05-19T00:24:06
| 2024-11-06T17:29:19
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When using openwebui for ollama or textgenwebui, you can control what the model's response begins with to steer it in a certain direction. It would be very helpful to have this built into the API. I have struggled to recreate this effect with the API by including the model's response portion of the template in my initial prompt, but have been unsuccessful so far.
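For reference, a minimal sketch of the kind of workaround described above, under assumptions: a local Ollama server, `raw: true` on `/api/generate` so the server does not apply its own template, and Llama 3-style template tokens that are illustrative and must match the model actually in use. The idea is to end the prompt with the assistant header plus the desired opening words so the completion continues from them.
```go
// Sketch only: prefill the start of the response by ending a raw prompt
// with the assistant header and the desired opening words.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":  "llama3",
		"raw":    true,  // skip server-side templating
		"stream": false, // return a single JSON response
		"prompt": "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n" +
			"Write a haiku about the sea.<|eot_id|>" +
			"<|start_header_id|>assistant<|end_header_id|>\n\nSure, here is",
	})

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // the model continues from "Sure, here is"
}
```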
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4518/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4518/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/798
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/798/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/798/comments
|
https://api.github.com/repos/ollama/ollama/issues/798/events
|
https://github.com/ollama/ollama/issues/798
| 1,944,769,217
|
I_kwDOJ0Z1Ps5z6tLB
| 798
|
JSON Marshal Escapes Special Characters in Prompts
|
{
"login": "deichbewohner",
"id": 54838329,
"node_id": "MDQ6VXNlcjU0ODM4MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/54838329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deichbewohner",
"html_url": "https://github.com/deichbewohner",
"followers_url": "https://api.github.com/users/deichbewohner/followers",
"following_url": "https://api.github.com/users/deichbewohner/following{/other_user}",
"gists_url": "https://api.github.com/users/deichbewohner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deichbewohner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deichbewohner/subscriptions",
"organizations_url": "https://api.github.com/users/deichbewohner/orgs",
"repos_url": "https://api.github.com/users/deichbewohner/repos",
"events_url": "https://api.github.com/users/deichbewohner/events{/privacy}",
"received_events_url": "https://api.github.com/users/deichbewohner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-10-16T09:25:02
| 2023-10-17T16:31:19
| 2023-10-17T16:31:18
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When using the `json.Marshal()` function in `llama.go`, I've noticed that special characters like `<` and `>` are being automatically escaped to `\u003c` and `\u003e`, respectively. This is problematic, especially for prompts that use these characters.
**Example:**
Consider the following prompt:
```
<|system|>
</s>
<|user|>
Hi</s>
<|assistant|>
```
**Location of the Issue:**
The line responsible for this behavior is located [here](https://github.com/jmorganca/ollama/blob/06bcfbd6295b0aa0b4a63b6bd6731c0995f0802d/llm/llama.go#L547).
```go
data, err := json.Marshal(predReq)
```
**Proposed Solution:**
I am planning to submit a pull request to address this issue, ensuring that special characters in templates remain unescaped when marshaled into JSON.
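As one possible direction (not necessarily what the pull request will do), Go's standard `encoding/json` `Encoder` can be told not to escape `<`, `>`, and `&` via `SetEscapeHTML(false)`. The `predReq` value below is a stand-in for the real request struct.
```go
// Minimal sketch: marshal without HTML escaping so template tokens like
// <|system|> stay literal instead of becoming \u003c|system|\u003e.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	predReq := map[string]string{"prompt": "<|system|>\n</s>\n<|user|>\nHi</s>\n<|assistant|>\n"}

	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(false) // keep < and > literal
	if err := enc.Encode(predReq); err != nil {
		panic(err)
	}
	fmt.Print(buf.String())
}
```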
|
{
"login": "deichbewohner",
"id": 54838329,
"node_id": "MDQ6VXNlcjU0ODM4MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/54838329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deichbewohner",
"html_url": "https://github.com/deichbewohner",
"followers_url": "https://api.github.com/users/deichbewohner/followers",
"following_url": "https://api.github.com/users/deichbewohner/following{/other_user}",
"gists_url": "https://api.github.com/users/deichbewohner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deichbewohner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deichbewohner/subscriptions",
"organizations_url": "https://api.github.com/users/deichbewohner/orgs",
"repos_url": "https://api.github.com/users/deichbewohner/repos",
"events_url": "https://api.github.com/users/deichbewohner/events{/privacy}",
"received_events_url": "https://api.github.com/users/deichbewohner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/798/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4096
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4096/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4096/comments
|
https://api.github.com/repos/ollama/ollama/issues/4096/events
|
https://github.com/ollama/ollama/pull/4096
| 2,274,743,627
|
PR_kwDOJ0Z1Ps5uURzS
| 4,096
|
add _defaultApiClient in api/client.go for reuse
|
{
"login": "alwqx",
"id": 9915368,
"node_id": "MDQ6VXNlcjk5MTUzNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9915368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alwqx",
"html_url": "https://github.com/alwqx",
"followers_url": "https://api.github.com/users/alwqx/followers",
"following_url": "https://api.github.com/users/alwqx/following{/other_user}",
"gists_url": "https://api.github.com/users/alwqx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alwqx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alwqx/subscriptions",
"organizations_url": "https://api.github.com/users/alwqx/orgs",
"repos_url": "https://api.github.com/users/alwqx/repos",
"events_url": "https://api.github.com/users/alwqx/events{/privacy}",
"received_events_url": "https://api.github.com/users/alwqx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-05-02T06:43:01
| 2024-05-16T12:45:40
| 2024-05-10T00:19:22
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4096",
"html_url": "https://github.com/ollama/ollama/pull/4096",
"diff_url": "https://github.com/ollama/ollama/pull/4096.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4096.patch",
"merged_at": null
}
|
Hi, this PR mainly makes the following improvements:
1. Add `_defaultApiClient` for reuse
I found that `api.ClientFromEnvironment()` is called more than once by some functions in `cmd/cmd.go` (e.g. `RunHandler()`, `generateInteractive`). So I added **_defaultApiClient** for reuse, which also reduces memory allocations (a sketch of the idea follows below).
2. Update tests and split `TestClientFromEnvironment` into `TestClientFromEnvironment` and `TestGetOllamaHost`
I hope these improvements are helpful. @jmorganca @dhiltgen Please feel free to provide feedback if you have any suggestions, and if you think this PR is not needed, I will close it.
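For illustration only, a minimal sketch of the idea with hypothetical names (the PR's actual naming may differ): a package-level client built once from the environment and handed back to every CLI handler that needs it.
```go
// Sketch: lazily construct one api.Client and reuse it instead of calling
// api.ClientFromEnvironment() separately in each handler.
package cmd

import (
	"sync"

	"github.com/ollama/ollama/api"
)

var (
	defaultClientOnce sync.Once
	defaultClient     *api.Client
	defaultClientErr  error
)

// getDefaultClient builds the client from the environment on first use and
// returns the same instance (and any construction error) on later calls.
func getDefaultClient() (*api.Client, error) {
	defaultClientOnce.Do(func() {
		defaultClient, defaultClientErr = api.ClientFromEnvironment()
	})
	return defaultClient, defaultClientErr
}
```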
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4096/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8164
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8164/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8164/comments
|
https://api.github.com/repos/ollama/ollama/issues/8164/events
|
https://github.com/ollama/ollama/issues/8164
| 2,748,824,165
|
I_kwDOJ0Z1Ps6j17pl
| 8,164
|
llama3.2 3B "will fit in available VRAM" of a Nvidia 4060 TI but then runs on CPU. llm server error
|
{
"login": "felixniemeyer",
"id": 5720176,
"node_id": "MDQ6VXNlcjU3MjAxNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5720176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felixniemeyer",
"html_url": "https://github.com/felixniemeyer",
"followers_url": "https://api.github.com/users/felixniemeyer/followers",
"following_url": "https://api.github.com/users/felixniemeyer/following{/other_user}",
"gists_url": "https://api.github.com/users/felixniemeyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felixniemeyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felixniemeyer/subscriptions",
"organizations_url": "https://api.github.com/users/felixniemeyer/orgs",
"repos_url": "https://api.github.com/users/felixniemeyer/repos",
"events_url": "https://api.github.com/users/felixniemeyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/felixniemeyer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-12-18T21:57:41
| 2024-12-25T03:33:33
| 2024-12-18T22:00:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm trying to use llama3.2 on my Nvidia 4060 Ti 16GB but ollama runs it on the CPU.
Here is the server log with debug level logging.
```
2024/12/18 22:54:10 routes.go:1194: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/felix/davdev/ai/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-12-18T22:54:10.529+01:00 level=INFO source=images.go:753 msg="total blobs: 10"
time=2024-12-18T22:54:10.529+01:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-12-18T22:54:10.529+01:00 level=INFO source=routes.go:1245 msg="Listening on 127.0.0.1:11434 (version 0.5.1)"
time=2024-12-18T22:54:10.536+01:00 level=DEBUG source=common.go:79 msg="runners located" dir=/usr/lib/ollama/runners
time=2024-12-18T22:54:10.536+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-12-18T22:54:10.536+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-12-18T22:54:10.536+01:00 level=INFO source=routes.go:1274 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2]"
time=2024-12-18T22:54:10.536+01:00 level=DEBUG source=routes.go:1275 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-12-18T22:54:10.536+01:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-12-18T22:54:10.536+01:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2024-12-18T22:54:10.538+01:00 level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-12-18T22:54:10.538+01:00 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcuda.so*
time=2024-12-18T22:54:10.538+01:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/lib/ollama/libcuda.so* /home/felix/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-12-18T22:54:10.569+01:00 level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths="[/usr/lib/libcuda.so.550.135 /usr/lib64/libcuda.so.550.135]"
initializing /usr/lib/libcuda.so.550.135
dlsym: cuInit - 0x7f884667cc90
dlsym: cuDriverGetVersion - 0x7f884667ccb0
dlsym: cuDeviceGetCount - 0x7f884667ccf0
dlsym: cuDeviceGet - 0x7f884667ccd0
dlsym: cuDeviceGetAttribute - 0x7f884667cdd0
dlsym: cuDeviceGetUuid - 0x7f884667cd30
dlsym: cuDeviceGetName - 0x7f884667cd10
dlsym: cuCtxCreate_v3 - 0x7f884667cfb0
dlsym: cuMemGetInfo_v2 - 0x7f8846686ef0
dlsym: cuCtxDestroy - 0x7f88466e18d0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2024-12-18T22:54:10.631+01:00 level=DEBUG source=gpu.go:134 msg="detected GPUs" count=1 library=/usr/lib/libcuda.so.550.135
[GPU-2cdd1749-8b62-3f3e-4af8-08b5e1741b0d] CUDA totalMem 16073 mb
[GPU-2cdd1749-8b62-3f3e-4af8-08b5e1741b0d] CUDA freeMem 15886 mb
[GPU-2cdd1749-8b62-3f3e-4af8-08b5e1741b0d] Compute Capability 8.9
time=2024-12-18T22:54:10.720+01:00 level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2024-12-18T22:54:10.720+01:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-2cdd1749-8b62-3f3e-4af8-08b5e1741b0d library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4060 Ti" total="15.7 GiB" available="15.5 GiB"
[GIN] 2024/12/18 - 22:54:22 | 200 | 33.213µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/12/18 - 22:54:22 | 200 | 17.212787ms | 127.0.0.1 | POST "/api/show"
time=2024-12-18T22:54:22.318+01:00 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="31.3 GiB" before.free="30.2 GiB" before.free_swap="34.5 GiB" now.total="31.3 GiB" now.free="30.1 GiB" now.free_swap="34.5 GiB"
initializing /usr/lib/libcuda.so.550.135
dlsym: cuInit - 0x7f884667cc90
dlsym: cuDriverGetVersion - 0x7f884667ccb0
dlsym: cuDeviceGetCount - 0x7f884667ccf0
dlsym: cuDeviceGet - 0x7f884667ccd0
dlsym: cuDeviceGetAttribute - 0x7f884667cdd0
dlsym: cuDeviceGetUuid - 0x7f884667cd30
dlsym: cuDeviceGetName - 0x7f884667cd10
dlsym: cuCtxCreate_v3 - 0x7f884667cfb0
dlsym: cuMemGetInfo_v2 - 0x7f8846686ef0
dlsym: cuCtxDestroy - 0x7f88466e18d0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2024-12-18T22:54:22.405+01:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-2cdd1749-8b62-3f3e-4af8-08b5e1741b0d name="NVIDIA GeForce RTX 4060 Ti" overhead="0 B" before.total="15.7 GiB" before.free="15.5 GiB" now.total="15.7 GiB" now.free="15.5 GiB" now.used="186.7 MiB"
releasing cuda driver library
time=2024-12-18T22:54:22.405+01:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x562e5f053520 gpu_count=1
time=2024-12-18T22:54:22.436+01:00 level=DEBUG source=sched.go:224 msg="loading first model" model=/home/felix/davdev/ai/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
time=2024-12-18T22:54:22.436+01:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[15.5 GiB]"
time=2024-12-18T22:54:22.436+01:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/home/felix/davdev/ai/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff gpu=GPU-2cdd1749-8b62-3f3e-4af8-08b5e1741b0d parallel=4 available=16658268160 required="3.7 GiB"
time=2024-12-18T22:54:22.436+01:00 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="31.3 GiB" before.free="30.1 GiB" before.free_swap="34.5 GiB" now.total="31.3 GiB" now.free="30.1 GiB" now.free_swap="34.5 GiB"
initializing /usr/lib/libcuda.so.550.135
dlsym: cuInit - 0x7f884667cc90
dlsym: cuDriverGetVersion - 0x7f884667ccb0
dlsym: cuDeviceGetCount - 0x7f884667ccf0
dlsym: cuDeviceGet - 0x7f884667ccd0
dlsym: cuDeviceGetAttribute - 0x7f884667cdd0
dlsym: cuDeviceGetUuid - 0x7f884667cd30
dlsym: cuDeviceGetName - 0x7f884667cd10
dlsym: cuCtxCreate_v3 - 0x7f884667cfb0
dlsym: cuMemGetInfo_v2 - 0x7f8846686ef0
dlsym: cuCtxDestroy - 0x7f88466e18d0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-2cdd1749-8b62-3f3e-4af8-08b5e1741b0d name="NVIDIA GeForce RTX 4060 Ti" overhead="0 B" before.total="15.7 GiB" before.free="15.5 GiB" now.total="15.7 GiB" now.free="15.5 GiB" now.used="186.7 MiB"
releasing cuda driver library
time=2024-12-18T22:54:22.513+01:00 level=INFO source=server.go:104 msg="system memory" total="31.3 GiB" free="30.1 GiB" free_swap="34.5 GiB"
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[15.5 GiB]"
time=2024-12-18T22:54:22.513+01:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[15.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.7 GiB" memory.required.partial="3.7 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[3.7 GiB]" memory.weights.total="2.4 GiB" memory.weights.repeating="2.1 GiB" memory.weights.nonrepeating="308.2 MiB" memory.graph.full="424.0 MiB" memory.graph.partial="570.7 MiB"
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-12-18T22:54:22.514+01:00 level=DEBUG source=gpu.go:714 msg="no filter required for library cpu"
time=2024-12-18T22:54:22.515+01:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server runner --model /home/felix/davdev/ai/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --verbose --threads 8 --parallel 4 --port 41657"
time=2024-12-18T22:54:22.515+01:00 level=DEBUG source=server.go:393 msg=subprocess environment="[PATH=/opt/resolve/bin:/home/felix/scripts:/home/felix/.config/yarn/global/node_modules/.bin:/home/felix/.local/bin:/opt/google-cloud-cli/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/opt/cuda/bin:/opt/cuda/nsight_compute:/opt/cuda/nsight_systems/bin:/usr/lib/jvm/default/bin:/home/felix/.npm/global-packages/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/usr/lib/rustup/bin:/var/lib/snapd/snap/bin CUDA_PATH=/opt/cuda LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama:/usr/lib/ollama/runners/cpu_avx2]"
time=2024-12-18T22:54:22.515+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-12-18T22:54:22.515+01:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2024-12-18T22:54:22.515+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2024-12-18T22:54:22.518+01:00 level=INFO source=runner.go:946 msg="starting go runner"
time=2024-12-18T22:54:22.518+01:00 level=INFO source=runner.go:947 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=8
time=2024-12-18T22:54:22.519+01:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:41657"
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /home/felix/davdev/ai/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Llama 3.2 3B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Llama-3.2
llama_model_loader: - kv 5: general.size_label str = 3B
llama_model_loader: - kv 6: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 7: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 8: llama.block_count u32 = 28
llama_model_loader: - kv 9: llama.context_length u32 = 131072
llama_model_loader: - kv 10: llama.embedding_length u32 = 3072
llama_model_loader: - kv 11: llama.feed_forward_length u32 = 8192
llama_model_loader: - kv 12: llama.attention.head_count u32 = 24
llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 14: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 16: llama.attention.key_length u32 = 128
llama_model_loader: - kv 17: llama.attention.value_length u32 = 128
llama_model_loader: - kv 18: general.file_type u32 = 15
llama_model_loader: - kv 19: llama.vocab_size u32 = 128256
llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 22: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 28: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 58 tensors
llama_model_loader: - type q4_K: 168 tensors
llama_model_loader: - type q6_K: 29 tensors
time=2024-12-18T22:54:22.767+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 3072
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 24
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 3
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 8192
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 3B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 3.21 B
llm_load_print_meta: model size = 1.87 GiB (5.01 BPW)
llm_load_print_meta: general.name = Llama 3.2 3B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.12 MiB
llm_load_tensors: CPU buffer size = 1918.35 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
time=2024-12-18T22:54:23.018+01:00 level=DEBUG source=server.go:600 msg="model load progress 1.00"
llama_kv_cache_init: CPU KV buffer size = 896.00 MiB
llama_new_context_with_model: KV self size = 896.00 MiB, K (f16): 448.00 MiB, V (f16): 448.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.00 MiB
llama_new_context_with_model: CPU compute buffer size = 424.01 MiB
llama_new_context_with_model: graph nodes = 902
llama_new_context_with_model: graph splits = 1
time=2024-12-18T22:54:23.270+01:00 level=INFO source=server.go:594 msg="llama runner started in 0.75 seconds"
time=2024-12-18T22:54:23.270+01:00 level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/home/felix/davdev/ai/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
[GIN] 2024/12/18 - 22:54:23 | 200 | 968.500554ms | 127.0.0.1 | POST "/api/generate"
time=2024-12-18T22:54:23.270+01:00 level=DEBUG source=sched.go:466 msg="context for request finished"
time=2024-12-18T22:54:23.270+01:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/felix/davdev/ai/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff duration=5m0s
time=2024-12-18T22:54:23.270+01:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/home/felix/davdev/ai/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff refCount=0
```
Here
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.1
|
{
"login": "felixniemeyer",
"id": 5720176,
"node_id": "MDQ6VXNlcjU3MjAxNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5720176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felixniemeyer",
"html_url": "https://github.com/felixniemeyer",
"followers_url": "https://api.github.com/users/felixniemeyer/followers",
"following_url": "https://api.github.com/users/felixniemeyer/following{/other_user}",
"gists_url": "https://api.github.com/users/felixniemeyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felixniemeyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felixniemeyer/subscriptions",
"organizations_url": "https://api.github.com/users/felixniemeyer/orgs",
"repos_url": "https://api.github.com/users/felixniemeyer/repos",
"events_url": "https://api.github.com/users/felixniemeyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/felixniemeyer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8164/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/847
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/847/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/847/comments
|
https://api.github.com/repos/ollama/ollama/issues/847/events
|
https://github.com/ollama/ollama/pull/847
| 1,953,059,144
|
PR_kwDOJ0Z1Ps5dUSRs
| 847
|
new readline library
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-10-19T21:09:43
| 2023-10-28T14:12:05
| 2023-10-25T23:41:19
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/847",
"html_url": "https://github.com/ollama/ollama/pull/847",
"diff_url": "https://github.com/ollama/ollama/pull/847.diff",
"patch_url": "https://github.com/ollama/ollama/pull/847.patch",
"merged_at": "2023-10-25T23:41:18"
}
|
This is a simplified version of the readline library which cuts out a lot of the complexity of the version we were using. There are still a few things to add, like history and getting the multi-line prompts working correctly, but most (many?) things should be more or less working, including:
* Each of the Ctrl-? chars (Ctrl-d, Ctrl-k, Ctrl-c, Ctrl-u, Ctrl-a, Ctrl-e, Ctrl-l, etc.)
* Line wrap with backspace/arrow keys
* Entering/exiting raw mode
Would love some feedback if people could try it out.
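Not the PR's actual implementation — just a minimal sketch of the entering/exiting-raw-mode handling it describes, assuming `golang.org/x/term` is available.
```go
// Sketch: enter raw mode, read bytes one at a time, handle Ctrl-C / Ctrl-D,
// and always restore the terminal state on exit.
package main

import (
	"fmt"
	"os"

	"golang.org/x/term"
)

func main() {
	fd := int(os.Stdin.Fd())
	oldState, err := term.MakeRaw(fd) // enter raw mode
	if err != nil {
		panic(err)
	}
	defer term.Restore(fd, oldState) // exit raw mode no matter what

	buf := make([]byte, 1)
	for {
		if _, err := os.Stdin.Read(buf); err != nil {
			return
		}
		switch buf[0] {
		case 3, 4: // Ctrl-C, Ctrl-D
			fmt.Print("\r\n")
			return
		default:
			fmt.Printf("read byte %d\r\n", buf[0]) // raw mode needs explicit \r\n
		}
	}
}
```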
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/847/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/847/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5528
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5528/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5528/comments
|
https://api.github.com/repos/ollama/ollama/issues/5528/events
|
https://github.com/ollama/ollama/issues/5528
| 2,394,047,928
|
I_kwDOJ0Z1Ps6OskW4
| 5,528
|
Error Pulling Manifest MacOSX
|
{
"login": "Moonlight1220",
"id": 172665223,
"node_id": "U_kgDOCkqphw",
"avatar_url": "https://avatars.githubusercontent.com/u/172665223?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moonlight1220",
"html_url": "https://github.com/Moonlight1220",
"followers_url": "https://api.github.com/users/Moonlight1220/followers",
"following_url": "https://api.github.com/users/Moonlight1220/following{/other_user}",
"gists_url": "https://api.github.com/users/Moonlight1220/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Moonlight1220/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moonlight1220/subscriptions",
"organizations_url": "https://api.github.com/users/Moonlight1220/orgs",
"repos_url": "https://api.github.com/users/Moonlight1220/repos",
"events_url": "https://api.github.com/users/Moonlight1220/events{/privacy}",
"received_events_url": "https://api.github.com/users/Moonlight1220/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-07T13:04:52
| 2024-08-10T11:43:32
| 2024-07-09T14:38:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
### Issue #5499 Continued
This issue is associated with issue #5499; please see that issue for more context. After some testing on my Windows 11 Hyper-V machine, I can confidently say this bug is exclusive to macOS. Please let me know if you have any ideas on how I can get this up and running again.
### OS
macOS
### GPU
Intel
### CPU
Intel
### Ollama version
0.1.48
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5528/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/494
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/494/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/494/comments
|
https://api.github.com/repos/ollama/ollama/issues/494/events
|
https://github.com/ollama/ollama/pull/494
| 1,887,156,814
|
PR_kwDOJ0Z1Ps5Z2YVS
| 494
|
Remove already applied patches
|
{
"login": "avri-schneider",
"id": 6785181,
"node_id": "MDQ6VXNlcjY3ODUxODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6785181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avri-schneider",
"html_url": "https://github.com/avri-schneider",
"followers_url": "https://api.github.com/users/avri-schneider/followers",
"following_url": "https://api.github.com/users/avri-schneider/following{/other_user}",
"gists_url": "https://api.github.com/users/avri-schneider/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avri-schneider/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avri-schneider/subscriptions",
"organizations_url": "https://api.github.com/users/avri-schneider/orgs",
"repos_url": "https://api.github.com/users/avri-schneider/repos",
"events_url": "https://api.github.com/users/avri-schneider/events{/privacy}",
"received_events_url": "https://api.github.com/users/avri-schneider/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-09-08T07:52:27
| 2023-09-09T17:36:24
| 2023-09-08T14:21:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/494",
"html_url": "https://github.com/ollama/ollama/pull/494",
"diff_url": "https://github.com/ollama/ollama/pull/494.diff",
"patch_url": "https://github.com/ollama/ollama/pull/494.patch",
"merged_at": null
}
| null |
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/494/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4644
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4644/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4644/comments
|
https://api.github.com/repos/ollama/ollama/issues/4644/events
|
https://github.com/ollama/ollama/issues/4644
| 2,317,415,905
|
I_kwDOJ0Z1Ps6KIPXh
| 4,644
|
more types of models
|
{
"login": "zsq2010",
"id": 4374659,
"node_id": "MDQ6VXNlcjQzNzQ2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4374659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zsq2010",
"html_url": "https://github.com/zsq2010",
"followers_url": "https://api.github.com/users/zsq2010/followers",
"following_url": "https://api.github.com/users/zsq2010/following{/other_user}",
"gists_url": "https://api.github.com/users/zsq2010/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zsq2010/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zsq2010/subscriptions",
"organizations_url": "https://api.github.com/users/zsq2010/orgs",
"repos_url": "https://api.github.com/users/zsq2010/repos",
"events_url": "https://api.github.com/users/zsq2010/events{/privacy}",
"received_events_url": "https://api.github.com/users/zsq2010/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-26T03:28:38
| 2024-07-25T23:24:15
| 2024-07-25T23:24:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Could we have more types of models, like vision models, TTS, OCR, etc.?
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4644/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6018
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6018/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6018/comments
|
https://api.github.com/repos/ollama/ollama/issues/6018/events
|
https://github.com/ollama/ollama/issues/6018
| 2,433,585,663
|
I_kwDOJ0Z1Ps6RDZH_
| 6,018
|
max retries exceeded: unexpected EOF
|
{
"login": "davidsolal",
"id": 128038753,
"node_id": "U_kgDOB6G3YQ",
"avatar_url": "https://avatars.githubusercontent.com/u/128038753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidsolal",
"html_url": "https://github.com/davidsolal",
"followers_url": "https://api.github.com/users/davidsolal/followers",
"following_url": "https://api.github.com/users/davidsolal/following{/other_user}",
"gists_url": "https://api.github.com/users/davidsolal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidsolal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidsolal/subscriptions",
"organizations_url": "https://api.github.com/users/davidsolal/orgs",
"repos_url": "https://api.github.com/users/davidsolal/repos",
"events_url": "https://api.github.com/users/davidsolal/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidsolal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-07-27T18:34:04
| 2024-09-04T04:19:49
| 2024-09-04T04:19:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
> The bug shown in this issue specifically is now fixed as we run models directly rather than in a subprocess. Although there are still on EOF errors. If anyone else sees an EOF please open a new issue so we can triage it appropriately.
_Originally posted by @BruceMacD in https://github.com/ollama/ollama/issues/1158#issuecomment-1989152214_
Hi, I am new to reporting issues. I found that this issue is closed, but also read that I should open a new one if it happens again.
Is there anything I can do to help find the origin of the issue?

|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6018/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2202
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2202/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2202/comments
|
https://api.github.com/repos/ollama/ollama/issues/2202/events
|
https://github.com/ollama/ollama/pull/2202
| 2,101,738,189
|
PR_kwDOJ0Z1Ps5lIlwY
| 2,202
|
Add chat app
|
{
"login": "Yuan-ManX",
"id": 68322456,
"node_id": "MDQ6VXNlcjY4MzIyNDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/68322456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yuan-ManX",
"html_url": "https://github.com/Yuan-ManX",
"followers_url": "https://api.github.com/users/Yuan-ManX/followers",
"following_url": "https://api.github.com/users/Yuan-ManX/following{/other_user}",
"gists_url": "https://api.github.com/users/Yuan-ManX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yuan-ManX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yuan-ManX/subscriptions",
"organizations_url": "https://api.github.com/users/Yuan-ManX/orgs",
"repos_url": "https://api.github.com/users/Yuan-ManX/repos",
"events_url": "https://api.github.com/users/Yuan-ManX/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yuan-ManX/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-26T07:42:31
| 2024-02-20T02:08:50
| 2024-02-20T02:08:49
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2202",
"html_url": "https://github.com/ollama/ollama/pull/2202",
"diff_url": "https://github.com/ollama/ollama/pull/2202.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2202.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2202/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4992
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4992/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4992/comments
|
https://api.github.com/repos/ollama/ollama/issues/4992/events
|
https://github.com/ollama/ollama/issues/4992
| 2,347,757,142
|
I_kwDOJ0Z1Ps6L7-5W
| 4,992
|
error pulling llama2 manifest
|
{
"login": "adityapandit1798",
"id": 50072336,
"node_id": "MDQ6VXNlcjUwMDcyMzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/50072336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adityapandit1798",
"html_url": "https://github.com/adityapandit1798",
"followers_url": "https://api.github.com/users/adityapandit1798/followers",
"following_url": "https://api.github.com/users/adityapandit1798/following{/other_user}",
"gists_url": "https://api.github.com/users/adityapandit1798/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adityapandit1798/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adityapandit1798/subscriptions",
"organizations_url": "https://api.github.com/users/adityapandit1798/orgs",
"repos_url": "https://api.github.com/users/adityapandit1798/repos",
"events_url": "https://api.github.com/users/adityapandit1798/events{/privacy}",
"received_events_url": "https://api.github.com/users/adityapandit1798/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-06-12T04:04:53
| 2024-06-12T04:05:32
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### can't pull manifests
ollama pull llama2:7b
pulling manifest
Error: Head "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/89/8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240612%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240612T040009Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=15b16e1b97cea3f5e085e8ba72d2eeb522de4115e625c2e46b552912bb364488": dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host
ollama pull llama2:7b
pulling manifest
Error: Head "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/89/8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240612%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240612T040009Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=15b16e1b97cea3f5e085e8ba72d2eeb522de4115e625c2e46b552912bb364488": dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host
Tried the Docker way also, same error.
### OS
WSL2
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.43
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4992/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5645
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5645/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5645/comments
|
https://api.github.com/repos/ollama/ollama/issues/5645/events
|
https://github.com/ollama/ollama/pull/5645
| 2,404,789,175
|
PR_kwDOJ0Z1Ps51Lc6e
| 5,645
|
Clean up old files when installing on Windows
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-12T05:32:49
| 2024-07-12T15:13:34
| 2024-07-12T05:53:46
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5645",
"html_url": "https://github.com/ollama/ollama/pull/5645",
"diff_url": "https://github.com/ollama/ollama/pull/5645.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5645.patch",
"merged_at": "2024-07-12T05:53:46"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5645/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2234
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2234/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2234/comments
|
https://api.github.com/repos/ollama/ollama/issues/2234/events
|
https://github.com/ollama/ollama/issues/2234
| 2,103,796,127
|
I_kwDOJ0Z1Ps59ZWGf
| 2,234
|
:memo: Better description for `openchat-3.5-0106-laser`
|
{
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/followers",
"following_url": "https://api.github.com/users/adriens/following{/other_user}",
"gists_url": "https://api.github.com/users/adriens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adriens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adriens/subscriptions",
"organizations_url": "https://api.github.com/users/adriens/orgs",
"repos_url": "https://api.github.com/users/adriens/repos",
"events_url": "https://api.github.com/users/adriens/events{/privacy}",
"received_events_url": "https://api.github.com/users/adriens/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-01-27T20:35:35
| 2024-01-27T20:55:53
| 2024-01-27T20:55:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# :grey_question: About
In the following [tweet](https://twitter.com/ivanfioravanti/status/1751329888231915725)

, the `openchat-3.5-0106-laser` model is known for having _Strong math capabilities without compromise!_.
**:point_right: Still, on [its `ollama` page](https://ollama.ai/ifioravanti/openchat-3.5-0106-laser), there is no mention of that in the model description:**

# :pray: Documentation request
In addition to the following description:
> "A laser version of [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)"
Would you add something like _Strong mathematics capabilities without compromise!_
# :moneybag: Benefits
- Better indexing (including on Google)
- More RAG opportunities on top of `ollama` library
```sql
SELECT fts_main_model_details.match_bm25(id, 'math') AS score,
id,
full_desc
FROM model_details
WHERE
score IS NOT NULL
ORDER BY score desc;
```

|
{
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/followers",
"following_url": "https://api.github.com/users/adriens/following{/other_user}",
"gists_url": "https://api.github.com/users/adriens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adriens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adriens/subscriptions",
"organizations_url": "https://api.github.com/users/adriens/orgs",
"repos_url": "https://api.github.com/users/adriens/repos",
"events_url": "https://api.github.com/users/adriens/events{/privacy}",
"received_events_url": "https://api.github.com/users/adriens/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2234/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2234/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8534
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8534/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8534/comments
|
https://api.github.com/repos/ollama/ollama/issues/8534/events
|
https://github.com/ollama/ollama/issues/8534
| 2,804,315,742
|
I_kwDOJ0Z1Ps6nJnZe
| 8,534
|
Llama 3.1 sha256 mismatch
|
{
"login": "xihuai18",
"id": 23721828,
"node_id": "MDQ6VXNlcjIzNzIxODI4",
"avatar_url": "https://avatars.githubusercontent.com/u/23721828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xihuai18",
"html_url": "https://github.com/xihuai18",
"followers_url": "https://api.github.com/users/xihuai18/followers",
"following_url": "https://api.github.com/users/xihuai18/following{/other_user}",
"gists_url": "https://api.github.com/users/xihuai18/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xihuai18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xihuai18/subscriptions",
"organizations_url": "https://api.github.com/users/xihuai18/orgs",
"repos_url": "https://api.github.com/users/xihuai18/repos",
"events_url": "https://api.github.com/users/xihuai18/events{/privacy}",
"received_events_url": "https://api.github.com/users/xihuai18/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-22T12:53:01
| 2025-01-22T20:11:00
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
<img width="909" alt="Image" src="https://github.com/user-attachments/assets/a8f79e64-2f9b-4a6f-b5cc-a1534c8479b5" />
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8534/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4330
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4330/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4330/comments
|
https://api.github.com/repos/ollama/ollama/issues/4330/events
|
https://github.com/ollama/ollama/pull/4330
| 2,290,515,250
|
PR_kwDOJ0Z1Ps5vI_pV
| 4,330
|
cache and reuse intermediate blobs
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-10T23:17:44
| 2024-05-20T21:38:53
| 2024-05-20T20:54:42
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4330",
"html_url": "https://github.com/ollama/ollama/pull/4330",
"diff_url": "https://github.com/ollama/ollama/pull/4330.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4330.patch",
"merged_at": "2024-05-20T20:54:41"
}
|
particularly useful for zipfiles and f16s
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4330/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5609
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5609/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5609/comments
|
https://api.github.com/repos/ollama/ollama/issues/5609/events
|
https://github.com/ollama/ollama/pull/5609
| 2,401,398,711
|
PR_kwDOJ0Z1Ps51AK3K
| 5,609
|
Return 405 for Unsupported Methods on Endpoints, 204 for Cross-Origin OPTIONS
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2024-07-10T18:25:26
| 2024-08-12T18:41:54
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5609",
"html_url": "https://github.com/ollama/ollama/pull/5609",
"diff_url": "https://github.com/ollama/ollama/pull/5609.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5609.patch",
"merged_at": null
}
|
Resolves #5483
Previously `curl -X POST http://localhost:11434/api/ps` --> 404 (corrected to 405)
Resolves #5294
Previously, `curl -X OPTIONS http://localhost:11434/api/chat` --> 204
`curl -X OPTIONS http://127.0.0.1:11434/api/chat` --> 404 (corrected to 204)
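A quick way to sanity-check the corrected behaviour against a local server — a sketch only; the host, port and use of the `requests` library are assumptions, not part of this PR:
```python
# Sketch: verify the status codes described above against a local Ollama server.
# Host/port are the defaults and purely illustrative.
import requests

base = "http://localhost:11434"

# An unsupported method on an endpoint should now yield 405 instead of 404.
print("POST /api/ps ->", requests.post(f"{base}/api/ps").status_code)

# A cross-origin preflight OPTIONS request should yield 204.
headers = {"Origin": "http://example.com"}
print("OPTIONS /api/chat ->", requests.options(f"{base}/api/chat", headers=headers).status_code)
```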
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5609/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5526
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5526/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5526/comments
|
https://api.github.com/repos/ollama/ollama/issues/5526/events
|
https://github.com/ollama/ollama/issues/5526
| 2,393,916,063
|
I_kwDOJ0Z1Ps6OsEKf
| 5,526
|
Models Created from GGUF File Missing from api/models Endpoint (after some time) Despite Appearing in ollama list
|
{
"login": "chrisoutwright",
"id": 27736055,
"node_id": "MDQ6VXNlcjI3NzM2MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/27736055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisoutwright",
"html_url": "https://github.com/chrisoutwright",
"followers_url": "https://api.github.com/users/chrisoutwright/followers",
"following_url": "https://api.github.com/users/chrisoutwright/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisoutwright/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisoutwright/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisoutwright/subscriptions",
"organizations_url": "https://api.github.com/users/chrisoutwright/orgs",
"repos_url": "https://api.github.com/users/chrisoutwright/repos",
"events_url": "https://api.github.com/users/chrisoutwright/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisoutwright/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-07-07T06:34:45
| 2024-08-20T21:09:59
| 2024-08-20T21:09:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
### Issue with Getting Model over Ollama API api/models after Creating Model in Ollama
#### Steps to Reproduce
1. Execute the following command to merge GGUF files, for example:
```shell
C:\Users\Chris>D:\llama-b3066-bin-win-avx512-x64\gguf-split --merge C:\Users\Chris\Downloads\Q4_K_M-00001-of-0002.gguf E:\openbiollm-llama-3_Q4_K_M.gguf
```
Output:
```
gguf_merge: C:\Users\Chris\Downloads\Q4_K_M-00001-of-00002.gguf -> E:\openbiollm-llama-3_Q4_K_M.gguf
gguf_merge: reading metadata C:\Users\Chris\Downloads\Q4_K_M-00001-of-00002.gguf ...done
gguf_merge: reading metadata C:\Users\Chris\Downloads\Q4_K_M-00002-of-00002.gguf ...done
gguf_merge: writing tensors C:\Users\Chris\Downloads\Q4_K_M-00001-of-00002.gguf ...done
gguf_merge: writing tensors C:\Users\Chris\Downloads\Q4_K_M-00002-of-00002.gguf ...done
gguf_merge: E:\openbiollm-llama-3_Q4_K_M.gguf merged from 2 split with 723 tensors.
```
3. Attempt to create a new model in Ollama:
```shell
C:\Users\Chris>ollama create openbiollm-llama-3_Q4_K_M -f E:\openbiollm-llama-3_Q4_K_M.modelfile
```
Output:
```
transferring model data ⠋
using existing layer sha256:8f3b57672da97273e31ea197ac93d6b44ec2c00af914c43c141f8a7571a3c844
using existing layer sha256:2190828de961641d5a7b034d11c3e34f3a7e91e9ec195309770fb337231cc085
using existing layer sha256:e9d486088426ca9362945844228c16a846e49f4d310c638fc79b60e780e46045
using existing layer sha256:577073ffcc6ce95b9981eacc77d1039568639e5638e83044994560d9ef82ce1b
using existing layer sha256:f2f08b422a621fe6eab6361c3cb1b66526f90b2db8ed4b54b8d5e06b8a5464f9
writing manifest
success
```
#### Issue
The Ollama tool completed the creation of the new model successfully, and the `ollama list` command shows the new model, but the model is missing from the api/models list. (This happened after some hours; I did change the System template afterwards, so it no longer corresponds to the modelfile below.)
Original modelfile:
```
FROM ./openbiollm-llama-3_Q4_K_M.gguf
TEMPLATE "{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
PARAMETER num_keep 24
SYSTEM Format the reply in MarkDown format.
```
#### Expected Behavior
The `ollama create` command should complete successfully and the new model should appear in the list of available models when using `ollama list`. Additionally, it should be listed in api/models, allowing it to be used in applications that talk to the API, such as Open WebUI.
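For reference, a minimal sketch (assuming a default local server at `localhost:11434`) that cross-checks the `ollama list` output against the names returned by the HTTP tags endpoint, which is what API clients typically enumerate:
```python
# Sketch: compare the CLI listing with the models reported over HTTP.
# Assumes a default local server; not specific to any one model.
import json
import subprocess
import urllib.request

cli_out = subprocess.run(["ollama", "list"], capture_output=True, text=True).stdout
cli_names = {line.split()[0] for line in cli_out.splitlines()[1:] if line.strip()}

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    api_names = {m["name"] for m in json.load(resp)["models"]}

print("In CLI but missing from the API:", cli_names - api_names)
```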
### Questions
- Can the use of uppercase characters in the model name (e.g., openbiollm-llama-3_Q4_K_M) cause issues with model API listing?
- What changes could cause the API iterator to skip over the newly created model?
#### Environment
- **OS:** Windows 10
- **Tool Version:** Ollama version (0.1.48)
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.48
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5526/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5526/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/4975
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4975/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4975/comments
|
https://api.github.com/repos/ollama/ollama/issues/4975/events
|
https://github.com/ollama/ollama/issues/4975
| 2,346,038,035
|
I_kwDOJ0Z1Ps6L1bMT
| 4,975
|
Is RTX 4070 and not RTX 4070ti supported - ambiguous documentation
|
{
"login": "thinkrapido",
"id": 1568087,
"node_id": "MDQ6VXNlcjE1NjgwODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1568087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thinkrapido",
"html_url": "https://github.com/thinkrapido",
"followers_url": "https://api.github.com/users/thinkrapido/followers",
"following_url": "https://api.github.com/users/thinkrapido/following{/other_user}",
"gists_url": "https://api.github.com/users/thinkrapido/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thinkrapido/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thinkrapido/subscriptions",
"organizations_url": "https://api.github.com/users/thinkrapido/orgs",
"repos_url": "https://api.github.com/users/thinkrapido/repos",
"events_url": "https://api.github.com/users/thinkrapido/events{/privacy}",
"received_events_url": "https://api.github.com/users/thinkrapido/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-06-11T10:22:49
| 2024-06-14T00:07:46
| 2024-06-14T00:07:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello,
my prompts to the ollama model codellama:34b-code-q6_K are taking very long to process.
In the CPU monitor, many CPUs get involved when calculating an answer.
What am I doing wrong? Is it a bug, or do I have to bear with it?
I expect answers within about a second.
The documentation at NVIDIA (https://www.nvidia.com/de-de/geforce/graphics-cards/40-series/rtx-4070-family/) says that it is CUDA-Enabled, but on https://github.com/ollama/ollama/blob/main/docs/gpu.md it is not listed for capability 8.9.
I'm using a Linux system with the latest CUDA libraries installed.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.1.24
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4975/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/30
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/30/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/30/comments
|
https://api.github.com/repos/ollama/ollama/issues/30/events
|
https://github.com/ollama/ollama/issues/30
| 1,783,169,821
|
I_kwDOJ0Z1Ps5qSQMd
| 30
|
cli feedback for models already downloaded
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-06-30T21:09:28
| 2023-07-04T14:29:34
| 2023-07-04T14:29:34
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
In the case where the model has already been downloaded, it currently just doesn't output anything.
```
ollama pull huggingface.co/TheBloke/orca_mini_3B-GGML
```
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/30/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/30/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1701
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1701/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1701/comments
|
https://api.github.com/repos/ollama/ollama/issues/1701/events
|
https://github.com/ollama/ollama/issues/1701
| 2,055,343,482
|
I_kwDOJ0Z1Ps56gg16
| 1,701
|
Create uninstall script
|
{
"login": "vtrenton",
"id": 85969349,
"node_id": "MDQ6VXNlcjg1OTY5MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/85969349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vtrenton",
"html_url": "https://github.com/vtrenton",
"followers_url": "https://api.github.com/users/vtrenton/followers",
"following_url": "https://api.github.com/users/vtrenton/following{/other_user}",
"gists_url": "https://api.github.com/users/vtrenton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vtrenton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vtrenton/subscriptions",
"organizations_url": "https://api.github.com/users/vtrenton/orgs",
"repos_url": "https://api.github.com/users/vtrenton/repos",
"events_url": "https://api.github.com/users/vtrenton/events{/privacy}",
"received_events_url": "https://api.github.com/users/vtrenton/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 5
| 2023-12-25T03:47:26
| 2024-09-06T04:53:48
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello, it would be nice to have an uninstall script to automate the uninstall process specified here: https://github.com/jmorganca/ollama/blob/main/docs/linux.md#uninstall. I'm adding a PR to this issue with something I made that I'd like to contribute.
Happy Holidays! :)
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1701/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1701/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5380
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5380/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5380/comments
|
https://api.github.com/repos/ollama/ollama/issues/5380/events
|
https://github.com/ollama/ollama/issues/5380
| 2,381,547,776
|
I_kwDOJ0Z1Ps6N84kA
| 5,380
|
Ollama Run provides numerical choice to run one of models from list
|
{
"login": "rayking99",
"id": 85595170,
"node_id": "MDQ6VXNlcjg1NTk1MTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/85595170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rayking99",
"html_url": "https://github.com/rayking99",
"followers_url": "https://api.github.com/users/rayking99/followers",
"following_url": "https://api.github.com/users/rayking99/following{/other_user}",
"gists_url": "https://api.github.com/users/rayking99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rayking99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rayking99/subscriptions",
"organizations_url": "https://api.github.com/users/rayking99/orgs",
"repos_url": "https://api.github.com/users/rayking99/repos",
"events_url": "https://api.github.com/users/rayking99/events{/privacy}",
"received_events_url": "https://api.github.com/users/rayking99/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-06-29T07:13:48
| 2024-06-29T23:17:02
| 2024-06-29T23:15:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I think it would be cool if `ollama run` without any extra arguments showed the models from `ollama list`, but with a number next to them.
I.e. `ollama run` ->
```sh
TYPE NUMBER OF MODEL TO RUN
[0] gemma2:27b-instruct-q8_0
[1] qwen2:0.5b
[2] mistral:7b-instruct-v0.3-q8_0
[3] gemma:2b-instruct
[4] phi3:3.8b-mini-instruct-4k-fp16
[5] llama3:8b-instruct-fp16
[6] llama3:70b-instruct-q8_0
```
My workflow is always `ollama list` -> `ollama run` with the model name I copy-pasted.
Apologies if this has already been requested or if it isn't part of the vision. Running in the shell is great; this would be a nice touch.
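Until something like this exists, a rough workaround sketch — assuming the default local endpoint and that `ollama` is on PATH; purely illustrative, not a proposed implementation:
```python
# Sketch: number the locally available models and run the one that is chosen.
import json
import subprocess
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = [m["name"] for m in json.load(resp)["models"]]

print("TYPE NUMBER OF MODEL TO RUN")
for i, name in enumerate(models):
    print(f"[{i}] {name}")

choice = int(input("> "))
subprocess.run(["ollama", "run", models[choice]])
```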
|
{
"login": "rayking99",
"id": 85595170,
"node_id": "MDQ6VXNlcjg1NTk1MTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/85595170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rayking99",
"html_url": "https://github.com/rayking99",
"followers_url": "https://api.github.com/users/rayking99/followers",
"following_url": "https://api.github.com/users/rayking99/following{/other_user}",
"gists_url": "https://api.github.com/users/rayking99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rayking99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rayking99/subscriptions",
"organizations_url": "https://api.github.com/users/rayking99/orgs",
"repos_url": "https://api.github.com/users/rayking99/repos",
"events_url": "https://api.github.com/users/rayking99/events{/privacy}",
"received_events_url": "https://api.github.com/users/rayking99/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5380/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7520
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7520/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7520/comments
|
https://api.github.com/repos/ollama/ollama/issues/7520/events
|
https://github.com/ollama/ollama/issues/7520
| 2,637,042,503
|
I_kwDOJ0Z1Ps6dLhNH
| 7,520
|
Build instructions in https://github.com/ollama/ollama/blob/main/llama/README.md are outdated or non-functional
|
{
"login": "yeahdongcn",
"id": 2831050,
"node_id": "MDQ6VXNlcjI4MzEwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2831050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeahdongcn",
"html_url": "https://github.com/yeahdongcn",
"followers_url": "https://api.github.com/users/yeahdongcn/followers",
"following_url": "https://api.github.com/users/yeahdongcn/following{/other_user}",
"gists_url": "https://api.github.com/users/yeahdongcn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yeahdongcn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yeahdongcn/subscriptions",
"organizations_url": "https://api.github.com/users/yeahdongcn/orgs",
"repos_url": "https://api.github.com/users/yeahdongcn/repos",
"events_url": "https://api.github.com/users/yeahdongcn/events{/privacy}",
"received_events_url": "https://api.github.com/users/yeahdongcn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 7700262114,
"node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g",
"url": "https://api.github.com/repos/ollama/ollama/labels/build",
"name": "build",
"color": "006b75",
"default": false,
"description": "Issues relating to building ollama from source"
}
] |
open
| false
| null |
[] | null | 2
| 2024-11-06T04:58:03
| 2024-11-17T14:08:53
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Following the build instructions in [README.md](https://github.com/ollama/ollama/blob/main/llama/README.md#cuda) for Linux/CUDA results in an error when running `make ggml_cuda.so`. The error is:
```bash
make: *** No rule to make target 'ggml_cuda.so'. Stop.
```
Could you please confirm if the documentation needs to be updated?
### OS
Linux
### GPU
Other
### CPU
Intel
### Ollama version
main branch
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7520/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7520/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1916
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1916/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1916/comments
|
https://api.github.com/repos/ollama/ollama/issues/1916/events
|
https://github.com/ollama/ollama/pull/1916
| 2,075,490,529
|
PR_kwDOJ0Z1Ps5jvqSp
| 1,916
|
download: add inactivity monitor
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-01-11T00:55:08
| 2024-01-26T18:56:01
| 2024-01-26T18:56:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1916",
"html_url": "https://github.com/ollama/ollama/pull/1916",
"diff_url": "https://github.com/ollama/ollama/pull/1916.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1916.patch",
"merged_at": "2024-01-26T18:56:01"
}
|
If a download part is inactive for some time, restart it. From profiling, it's possible for one or more of the download parts to stall and receive no content from the storage backend for many consecutive seconds.
This generally causes the download to slow to a rate of near zero at the end as other, faster parts complete their download.
This change adds a monitor for each part. If the part doesn't receive data (0 bytes) for a given window (5 seconds), the monitor will trigger a stall error and the request is interrupted and retried. This retry does _not_ increment the retry counter.
Related to #1736
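As a rough illustration of the stall-detection idea — a Python sketch under assumed values, not the Go code in this change, and it restarts the whole request rather than resuming a ranged part:
```python
# Sketch: a per-read timeout acts as the inactivity window; a stalled read
# restarts the request instead of counting against the normal retry limit.
# URL, window and chunk size are illustrative.
import socket
import urllib.request

def download_with_stall_detection(url, window=5.0, chunk_size=64 * 1024):
    while True:  # restart whenever a read stalls
        try:
            data = bytearray()
            # The timeout applies to every socket read: receiving no bytes for
            # `window` seconds raises socket.timeout, i.e. the part stalled.
            with urllib.request.urlopen(url, timeout=window) as resp:
                while chunk := resp.read(chunk_size):
                    data.extend(chunk)
            return bytes(data)
        except socket.timeout:
            continue  # stall: retry without incrementing the retry counter
```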
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1916/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1916/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/653
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/653/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/653/comments
|
https://api.github.com/repos/ollama/ollama/issues/653/events
|
https://github.com/ollama/ollama/pull/653
| 1,920,087,518
|
PR_kwDOJ0Z1Ps5blIh2
| 653
|
pythonic python client
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-09-30T03:43:51
| 2024-01-11T23:52:57
| 2024-01-11T23:52:54
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/653",
"html_url": "https://github.com/ollama/ollama/pull/653",
"diff_url": "https://github.com/ollama/ollama/pull/653.diff",
"patch_url": "https://github.com/ollama/ollama/pull/653.patch",
"merged_at": null
}
|
new features:
- chat
```python
client.chat('name', messages=[
{
'role': 'system',
'content': 'you are a good bot',
},
])
```
- create with a string input instead of a file
```python
client.create('name', modelfile='''
FROM llama2
PARAMETER stop </s>
''')
```
key differences:
- errors are not caught since they should be handled by the caller, e.g. for authorization
- responses are either returned in full if `stream=False` or returned as a generator if `stream=True`
- stream errors are raised
- no callbacks
- no prints
- use PEP257 doc strings
example usage:
non-streaming:
```shell
$ PYTHONPATH=/path/to/ollama/repo python -c 'import api.client; print(api.client.generate("llama2", "hello"))'
{'model': 'llama2', 'created_at': '2023-09-30T03:42:23.93685Z', 'done': True, 'context': [29961, 25580, 29962, 22172, 518, 29914, 25580, 29962, 29871, 15043, 29991, 739, 29915, 29879, 7575, 304, 5870, 366, 29889, 26077, 29991, 1128, 508, 306, 1371, 366, 9826, 29973], 'total_duration': 2113510125, 'load_duration': 1169916, 'prompt_eval_count': 1, 'eval_count': 20, 'eval_duration': 2071018000, 'response': " Hello! It's nice to meet you. everybody! How can I help you today?"}
```
```python
import api.client
print(api.client.generate('llama2', 'hello'))
```
streaming:
```shell
$ PYTHONPATH=/path/to/ollama/repo python -c 'import api.client; [print(x) for x in api.client.generate("llama2", "hello", stream=True)]'
{'model': 'llama2', 'created_at': '2023-09-30T04:05:23.293429Z', 'response': ' Hello', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:23.37183Z', 'response': '!', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:23.447804Z', 'response': ' It', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:23.524626Z', 'response': "'", 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:23.599881Z', 'response': 's', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:23.675419Z', 'response': ' nice', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:23.751513Z', 'response': ' to', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:23.828406Z', 'response': ' meet', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:23.903462Z', 'response': ' you', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:23.980436Z', 'response': '.', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:24.056455Z', 'response': ' nobody', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:24.133128Z', 'response': '.', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:24.209594Z', 'response': ' How', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:24.295103Z', 'response': ' can', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:24.374437Z', 'response': ' I', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:24.449904Z', 'response': ' help', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:24.525837Z', 'response': ' you', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:24.602418Z', 'response': ' today', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:24.681443Z', 'response': '?', 'done': False}
{'model': 'llama2', 'created_at': '2023-09-30T04:05:24.762006Z', 'done': True, 'context': [29961, 25580, 29962, 22172, 518, 29914, 25580, 29962, 29871, 15043, 29991, 739, 29915, 29879, 7575, 304, 5870, 366, 29889, 23196, 29889, 1128, 508, 306, 1371, 366, 9826, 29973], 'total_duration': 2571336583, 'load_duration': 1762500, 'prompt_eval_count': 1, 'eval_count': 20, 'eval_duration': 2495695000}
```
```python
import api.client
for chunk in api.client.generate('llama2', 'hello', stream=True):
print(chunk)
```
streaming just the response text:
```shell
$ PYTHONPATH=/path/to/ollama/repo python -c 'import api.client; [print(x.get("response", ""), end="", flush=True) for x in api.client.generate("llama2", "hello", stream=True)]; print()'
Hello! It's nice to meet you. surely. How can I assist you today? Is there something on your mind that you would like to talk about or ask?
```
```python
import api.client
for chunk in api.client.generate('llama2', 'hello', stream=True):
print(chunk.get('response', ''), end='', flush=True)
print()
```
Here's an example of llava in an xkcd explainer:
```python
import requests
import api.client as client
r = requests.get('https://imgs.xkcd.com/comics/standards.png')
r.raise_for_status()
for r in client.generate('mike/llava:13b', 'explain this comic', images=[r.content], stream=True):
print(r.get('response'), end='', flush=True)
```
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/653/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5786
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5786/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5786/comments
|
https://api.github.com/repos/ollama/ollama/issues/5786/events
|
https://github.com/ollama/ollama/issues/5786
| 2,417,988,954
|
I_kwDOJ0Z1Ps6QH5Va
| 5,786
|
Request to add support for InternVL-2 model
|
{
"login": "CNEA-lw",
"id": 164863967,
"node_id": "U_kgDOCdOf3w",
"avatar_url": "https://avatars.githubusercontent.com/u/164863967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CNEA-lw",
"html_url": "https://github.com/CNEA-lw",
"followers_url": "https://api.github.com/users/CNEA-lw/followers",
"following_url": "https://api.github.com/users/CNEA-lw/following{/other_user}",
"gists_url": "https://api.github.com/users/CNEA-lw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CNEA-lw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CNEA-lw/subscriptions",
"organizations_url": "https://api.github.com/users/CNEA-lw/orgs",
"repos_url": "https://api.github.com/users/CNEA-lw/repos",
"events_url": "https://api.github.com/users/CNEA-lw/events{/privacy}",
"received_events_url": "https://api.github.com/users/CNEA-lw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 7
| 2024-07-19T05:49:22
| 2025-01-28T13:43:47
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It is hoped that the ollama platform can add support for the InternVL-2 model series.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5786/reactions",
"total_count": 6,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5786/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6514
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6514/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6514/comments
|
https://api.github.com/repos/ollama/ollama/issues/6514/events
|
https://github.com/ollama/ollama/pull/6514
| 2,486,543,854
|
PR_kwDOJ0Z1Ps55bD_f
| 6,514
|
Implicit openai model parameter multiplication disabled
|
{
"login": "yaroslavyaroslav",
"id": 16612247,
"node_id": "MDQ6VXNlcjE2NjEyMjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/16612247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaroslavyaroslav",
"html_url": "https://github.com/yaroslavyaroslav",
"followers_url": "https://api.github.com/users/yaroslavyaroslav/followers",
"following_url": "https://api.github.com/users/yaroslavyaroslav/following{/other_user}",
"gists_url": "https://api.github.com/users/yaroslavyaroslav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaroslavyaroslav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaroslavyaroslav/subscriptions",
"organizations_url": "https://api.github.com/users/yaroslavyaroslav/orgs",
"repos_url": "https://api.github.com/users/yaroslavyaroslav/repos",
"events_url": "https://api.github.com/users/yaroslavyaroslav/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaroslavyaroslav/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-08-26T10:40:00
| 2024-10-29T11:48:13
| 2024-09-07T00:45:45
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6514",
"html_url": "https://github.com/ollama/ollama/pull/6514",
"diff_url": "https://github.com/ollama/ollama/pull/6514.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6514.patch",
"merged_at": "2024-09-07T00:45:45"
}
|
The current openai.go setup breaks a perfectly valid OpenAI configuration, because it implicitly doubles several of the numeric parameters.
I see the point of making the OpenAI-compatible endpoint behave like the native Ollama endpoint, but I think it was done the wrong way: a completely valid OpenAI config makes the model go wild.
As a result, a user cannot reuse the same config for OpenAI and Ollama without changing more than the model field.
```json
{
"url": "http://localhost:11434",
"token": "sk-your-token",
"status_hint": [
"name",
"prompt_mode",
"chat_model"
],
"assistants": [
{
"name": "qwen2",
"chat_model": "qwen2:1.5b",
"assistant_role": "You are a senior python and sublime text 4 code assistant",
"prompt_mode": "panel",
"temperature": 1, // makes model go insane, coz the temperature is 2 on ollama's side.
"max_tokens": 1048,
"top_p": 1,
"frequency_penalty": 0, // doubles as well
"presence_penalty": 0, // doubles as well
}
]
}
```
Closes: #6492
Affects: https://github.com/yaroslavyaroslav/OpenAI-sublime-text/issues/57
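For context, the pre-PR behavior can be pictured as a translation step like the hypothetical sketch below; this is an illustration of the doubling described above, not the actual openai.go code:
```go
package main

import "fmt"

// toOllamaOptions is a hypothetical illustration of the implicit scaling
// described above; the real openai.go mapping may differ in detail.
func toOllamaOptions(temperature, frequencyPenalty, presencePenalty float64) map[string]any {
	return map[string]any{
		"temperature":       temperature * 2.0, // a valid OpenAI value of 1.0 becomes 2.0
		"frequency_penalty": frequencyPenalty * 2.0,
		"presence_penalty":  presencePenalty * 2.0,
	}
}

func main() {
	fmt.Println(toOllamaOptions(1, 0, 0)) // map[frequency_penalty:0 presence_penalty:0 temperature:2]
}
```
With the implicit multiplication disabled, values sent to the OpenAI-compatible endpoint are passed through unchanged.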
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6514/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8011
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8011/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8011/comments
|
https://api.github.com/repos/ollama/ollama/issues/8011/events
|
https://github.com/ollama/ollama/issues/8011
| 2,726,681,192
|
I_kwDOJ0Z1Ps6ihdpo
| 8,011
|
Underflow error when using GPU memory overhead
|
{
"login": "ProjectMoon",
"id": 183856,
"node_id": "MDQ6VXNlcjE4Mzg1Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/183856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ProjectMoon",
"html_url": "https://github.com/ProjectMoon",
"followers_url": "https://api.github.com/users/ProjectMoon/followers",
"following_url": "https://api.github.com/users/ProjectMoon/following{/other_user}",
"gists_url": "https://api.github.com/users/ProjectMoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ProjectMoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ProjectMoon/subscriptions",
"organizations_url": "https://api.github.com/users/ProjectMoon/orgs",
"repos_url": "https://api.github.com/users/ProjectMoon/repos",
"events_url": "https://api.github.com/users/ProjectMoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/ProjectMoon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-12-09T10:51:38
| 2024-12-10T17:10:41
| 2024-12-10T17:10:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
GPUs:
- AMD RX 6800 XT (16 GB VRAM)
- NVidia GTX 970 (4 GB VRAM)
I have discovered a very odd and rather dangerous problem in ollama. I am running OpenWebUI on a machine that has a ROCm device (main GPU; 16 GB VRAM) and a CUDA device (an ancient Nvidia GPU). The Nvidia GPU has 4 GB of VRAM, and I use it as a secondary GPU for small models such as embedding models.
I'm using the CUDA variant of the OpenWebUI Docker image, which allows vector-search re-ranking models to run on CUDA. This loads the re-ranking model (a BGE reranker in my case) onto the Nvidia GPU.
All of this is fine and dandy. But the problem comes when ollama tries to run the actual main LLM I'm using (Qwen2.5 14b q5_K_M in this case).
For some reason, it seems to completely skip choosing the ROCm GPU as the GPU to load the model on, and tries to load it on the CUDA device, which promptly fails. It doesn't even fall back to CPU. This persists through restarts of ollama. No matter what, it will not consider the AMD GPU or CPU for loading any LLM, and essentially renders OpenWebUI non-functional.
I've narrowed it down to OpenWebUI loading the reranker model on the CUDA device. If OpenWebUI is shut down, everything starts working fine again in ollama. Everything also works fine in OpenWebUI until it loads the reranker model.
- But presumably this isn't specifically _because_ of OpenWebUI. I imagine it would happen with anything external to ollama that takes over the CUDA device.
I can provide debug logs if necessary.
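For illustration only, the selection behavior one would expect is sketched below (prefer the GPU with the most free memory that fits the model, otherwise fall back to CPU); this is an assumption about expected behavior, not Ollama's actual scheduler code:
```go
package main

import "fmt"

// device is a minimal stand-in for a GPU as seen by this scheduler sketch.
type device struct {
	name     string
	freeVRAM uint64 // bytes
}

// pickDevice prefers the GPU with the most free memory that can hold the
// model, and falls back to CPU when nothing fits. Illustrative only.
func pickDevice(devices []device, modelSize uint64) string {
	best, bestFree := "cpu", uint64(0)
	for _, d := range devices {
		if d.freeVRAM >= modelSize && d.freeVRAM > bestFree {
			best, bestFree = d.name, d.freeVRAM
		}
	}
	return best
}

func main() {
	gpus := []device{
		{"rocm:0 (RX 6800 XT)", 16 << 30},
		{"cuda:0 (GTX 970)", 4 << 30},
	}
	fmt.Println(pickDevice(gpus, 11<<30)) // expected: the 16 GB ROCm device
}
```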
### OS
Linux, Docker
### GPU
Nvidia, AMD
### CPU
AMD
### Ollama version
0.5.1,0.4.7
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8011/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6048
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6048/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6048/comments
|
https://api.github.com/repos/ollama/ollama/issues/6048/events
|
https://github.com/ollama/ollama/issues/6048
| 2,435,411,779
|
I_kwDOJ0Z1Ps6RKW9D
| 6,048
|
I can't run llama3.1
|
{
"login": "Saber120",
"id": 108297159,
"node_id": "U_kgDOBnR7xw",
"avatar_url": "https://avatars.githubusercontent.com/u/108297159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saber120",
"html_url": "https://github.com/Saber120",
"followers_url": "https://api.github.com/users/Saber120/followers",
"following_url": "https://api.github.com/users/Saber120/following{/other_user}",
"gists_url": "https://api.github.com/users/Saber120/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saber120/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saber120/subscriptions",
"organizations_url": "https://api.github.com/users/Saber120/orgs",
"repos_url": "https://api.github.com/users/Saber120/repos",
"events_url": "https://api.github.com/users/Saber120/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saber120/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-07-29T13:18:15
| 2024-07-31T08:27:13
| 2024-07-30T16:29:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I trained my own model based on Llama 3.1 8B and created the Modelfile for it successfully, but when I try to run it, it fails with this error:
```
ollama run mymodle:latest
Error: llama runner process has terminated: error loading model: done_getting_tensors: wrong number of tensors; expected 292, got 291
```
Note that the Llama 3 8B model was working and still works well.
### OS
Linux
### GPU
Nvidia
### CPU
Other
### Ollama version
0.3.0
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6048/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5410
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5410/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5410/comments
|
https://api.github.com/repos/ollama/ollama/issues/5410/events
|
https://github.com/ollama/ollama/pull/5410
| 2,384,283,202
|
PR_kwDOJ0Z1Ps50GC8j
| 5,410
|
Fix case for NumCtx
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-01T16:44:37
| 2024-07-01T16:54:23
| 2024-07-01T16:54:21
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5410",
"html_url": "https://github.com/ollama/ollama/pull/5410",
"diff_url": "https://github.com/ollama/ollama/pull/5410.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5410.patch",
"merged_at": "2024-07-01T16:54:21"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5410/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6122
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6122/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6122/comments
|
https://api.github.com/repos/ollama/ollama/issues/6122/events
|
https://github.com/ollama/ollama/pull/6122
| 2,442,737,119
|
PR_kwDOJ0Z1Ps53ItZE
| 6,122
|
llama: Implement timings response in Go server
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-01T15:16:03
| 2024-08-01T22:52:08
| 2024-08-01T22:52:06
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6122",
"html_url": "https://github.com/ollama/ollama/pull/6122",
"diff_url": "https://github.com/ollama/ollama/pull/6122.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6122.patch",
"merged_at": null
}
|
This implements the fields necessary for `run --verbose` to generate timing information.
(Examples from my [other branch wiring this into the main ollama serve](https://github.com/ollama/ollama/pull/5287))
C++ runner:
```
% ollama run orca-mini --verbose "what is the origin of independence day?"
Independence Day, also known as the Fourth of July, celebrates the adoption of the
Declaration of Independence on July 4, 1776. This document declared the United States as
a new nation and is considered a founding moment in American history. The holiday has
become an important cultural and historical event for Americans, with parades, fireworks,
barbecues, and other festivities taking place throughout the country.
total duration: 1.929017s
load duration: 1.036014583s
prompt eval count: 48 token(s)
prompt eval duration: 95.572ms
prompt eval rate: 502.24 tokens/s
eval count: 84 token(s)
eval duration: 796.532ms
eval rate: 105.46 tokens/s
```
Go runner:
```
% ollama run orca-mini --verbose "what is the origin of independence day?"
Day, also known as Canada Day, commemorates the day in 1867 when British
North America Act was passed, granting responsible government and Canada
as a self-governing dominion within the British Empire.
total duration: 3.265021459s
load duration: 535.450084ms
prompt eval count: 47 token(s)
prompt eval duration: 2.29s
prompt eval rate: 20.52 tokens/s
eval count: 48 token(s)
eval duration: 437ms
eval rate: 109.84 tokens/s
```
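As a side note, the metrics behind this output can be modeled roughly as in the sketch below; the struct and field names mirror the printed labels and are assumptions for illustration, not the server's actual types:
```go
package main

import (
	"fmt"
	"time"
)

// Timings mirrors the labels printed by `ollama run --verbose`; it is an
// assumed shape for illustration, not the server's actual response type.
type Timings struct {
	TotalDuration      time.Duration
	LoadDuration       time.Duration
	PromptEvalCount    int
	PromptEvalDuration time.Duration
	EvalCount          int
	EvalDuration       time.Duration
}

// EvalRate reproduces the "eval rate" line, e.g. 84 tokens / 796.532ms ≈ 105.46 tokens/s.
func (t Timings) EvalRate() float64 {
	return float64(t.EvalCount) / t.EvalDuration.Seconds()
}

func main() {
	t := Timings{EvalCount: 84, EvalDuration: 796532 * time.Microsecond}
	fmt.Printf("eval rate: %.2f tokens/s\n", t.EvalRate())
}
```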
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6122/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3106
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3106/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3106/comments
|
https://api.github.com/repos/ollama/ollama/issues/3106/events
|
https://github.com/ollama/ollama/issues/3106
| 2,184,204,424
|
I_kwDOJ0Z1Ps6CMFCI
| 3,106
|
Ollama ls not included in the -h/--help flags
|
{
"login": "aosan",
"id": 8534160,
"node_id": "MDQ6VXNlcjg1MzQxNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8534160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aosan",
"html_url": "https://github.com/aosan",
"followers_url": "https://api.github.com/users/aosan/followers",
"following_url": "https://api.github.com/users/aosan/following{/other_user}",
"gists_url": "https://api.github.com/users/aosan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aosan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aosan/subscriptions",
"organizations_url": "https://api.github.com/users/aosan/orgs",
"repos_url": "https://api.github.com/users/aosan/repos",
"events_url": "https://api.github.com/users/aosan/events{/privacy}",
"received_events_url": "https://api.github.com/users/aosan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-03-13T14:49:37
| 2024-03-15T01:46:58
| 2024-03-14T22:23:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
While working on a [Bash completion PR](https://github.com/ollama/ollama/pull/3105), I noticed that `ls` is missing from the command list shown by `-h`/`--help` and from the bare `ollama` output.

|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3106/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1123
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1123/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1123/comments
|
https://api.github.com/repos/ollama/ollama/issues/1123/events
|
https://github.com/ollama/ollama/issues/1123
| 1,992,568,966
|
I_kwDOJ0Z1Ps52xDCG
| 1,123
|
wizard-math:7b terminator not recognized
|
{
"login": "Detlev1",
"id": 71934197,
"node_id": "MDQ6VXNlcjcxOTM0MTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/71934197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Detlev1",
"html_url": "https://github.com/Detlev1",
"followers_url": "https://api.github.com/users/Detlev1/followers",
"following_url": "https://api.github.com/users/Detlev1/following{/other_user}",
"gists_url": "https://api.github.com/users/Detlev1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Detlev1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Detlev1/subscriptions",
"organizations_url": "https://api.github.com/users/Detlev1/orgs",
"repos_url": "https://api.github.com/users/Detlev1/repos",
"events_url": "https://api.github.com/users/Detlev1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Detlev1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-11-14T11:38:17
| 2023-12-26T21:28:29
| 2023-12-26T21:28:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm using the latest version of Ollama. With the wizard-math:7b model, requests don't complete properly; instead, a literal “</s>” terminator is returned in the output. Can I configure the terminator myself to fix this, or is there a way to terminate the request through the API?
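As a hedged workaround sketch (assuming the standard `stop` option applies to this model; the prompt is a placeholder), an extra stop sequence can be passed through the generate API so the server cuts generation at “</s>”:
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Ask the server to stop generation when the model emits "</s>".
	payload := map[string]any{
		"model":  "wizard-math:7b",
		"prompt": "What is 2 + 2?",
		"stream": false,
		"options": map[string]any{
			"stop": []string{"</s>"},
		},
	}
	body, _ := json.Marshal(payload)
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```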
|
{
"login": "Detlev1",
"id": 71934197,
"node_id": "MDQ6VXNlcjcxOTM0MTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/71934197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Detlev1",
"html_url": "https://github.com/Detlev1",
"followers_url": "https://api.github.com/users/Detlev1/followers",
"following_url": "https://api.github.com/users/Detlev1/following{/other_user}",
"gists_url": "https://api.github.com/users/Detlev1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Detlev1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Detlev1/subscriptions",
"organizations_url": "https://api.github.com/users/Detlev1/orgs",
"repos_url": "https://api.github.com/users/Detlev1/repos",
"events_url": "https://api.github.com/users/Detlev1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Detlev1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1123/timeline
| null |
completed
| false
|