Dataset schema (one GitHub issue per row):

- url: string (51–54 chars)
- repository_url: string (1 class, 1 value)
- labels_url: string (65–68 chars)
- comments_url: string (60–63 chars)
- events_url: string (58–61 chars)
- html_url: string (39–44 chars)
- id: int64 (1.78B–2.82B)
- node_id: string (18–19 chars)
- number: int64 (1–8.69k)
- title: string (1–382 chars)
- user: dict
- labels: list (length 0–5)
- state: string (2 classes)
- locked: bool (1 class)
- assignee: dict
- assignees: list (length 0–2)
- milestone: null
- comments: int64 (0–323)
- created_at: timestamp[s]
- updated_at: timestamp[s]
- closed_at: timestamp[s]
- author_association: string (4 classes)
- sub_issues_summary: dict
- active_lock_reason: null
- draft: bool (2 classes)
- pull_request: dict
- body: string (2–118k chars, nullable)
- closed_by: dict
- reactions: dict
- timeline_url: string (60–63 chars)
- performed_via_github_app: null
- state_reason: string (4 classes)
- is_pull_request: bool (2 classes)
https://api.github.com/repos/ollama/ollama/issues/4767
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4767/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4767/comments
|
https://api.github.com/repos/ollama/ollama/issues/4767/events
|
https://github.com/ollama/ollama/issues/4767
| 2,329,186,910
|
I_kwDOJ0Z1Ps6K1JJe
| 4,767
|
Model response corruption and leaking data between sessions.
|
{
"login": "MarkWard0110",
"id": 90335263,
"node_id": "MDQ6VXNlcjkwMzM1MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkWard0110",
"html_url": "https://github.com/MarkWard0110",
"followers_url": "https://api.github.com/users/MarkWard0110/followers",
"following_url": "https://api.github.com/users/MarkWard0110/following{/other_user}",
"gists_url": "https://api.github.com/users/MarkWard0110/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarkWard0110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarkWard0110/subscriptions",
"organizations_url": "https://api.github.com/users/MarkWard0110/orgs",
"repos_url": "https://api.github.com/users/MarkWard0110/repos",
"events_url": "https://api.github.com/users/MarkWard0110/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarkWard0110/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2024-06-01T14:58:55
| 2024-06-25T13:46:38
| 2024-06-25T13:46:38
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
`main`, when running a model (specifically `llama3:8b-instruct-fp16`), will begin to generate gibberish. It will also leak state between sessions. Swapping out the model resets the issue, but it quickly returns after a few runs against the model.
This issue does not happen with 0.1.38. `main` began to do this before 0.1.39 was released. I will try to track down the specific commit.
I have not upgraded to Ollama's latest version (0.1.39) because of what I am seeing with the latest `main`.
The behavior happened with NVIDIA driver version 550 as well as with the updated driver.
Hardware
NVIDIA RTX 4070 Ti Super 16GB
```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.02 Driver Version: 555.42.02 CUDA Version: 12.5 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4070 ... Off | 00000000:01:00.0 Off | N/A |
| 0% 36C P8 9W / 285W | 5MiB / 16376MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
```
In the following `ollama run` output, I begin with a freshly started `ollama serve` using the latest build off `main`. The chat starts out fine but then begins to generate gibberish. I then exit the chat and start a new session. The first prompt generates what appears to be information related to the previous session. Note that this behavior does not happen on 0.1.38.
first chat
```
ollama run llama3:8b-instruct-fp16
>>> Using Python write an example of the quick sort
Here is an example implementation of QuickSort in Python:
def quicksort(arr):
if len(arr) <= 1:
return arr
pivot = arr[0]
less = [x for x in arr[1:] if x <= pivot]
greater = [x for x in arr[1:] if x > pivot]
return quicksort(less) + [pivot] + quicksort(greater)
# Example usage:
arr = [5, 2, 8, 3, 1, 6, 4]
print(quicksort(arr)) # Output: [1, 2, 3, 4, 5, 6, 8]
Here's a brief explanation of how the algorithm works:
1. If the length of the input array is 0 or 1, return the original array (since it's already sorted).
2. Choose a pivot element from the array (in this case, we use the first element).
3. Partition the rest of the array into two lists: `less` and `greater`. `less` contains elements that are less than or equal to the pivot, while `greater` contains elements that are greater than the
pivot.
4. Recursively apply the QuickSort algorithm to `less` and `greater`.
5. Combine the results by concatenating the sorted `less` list, the pivot element, and the sorted `greater` list.
Note that this is a "Lomuto" partition scheme, which is slightly faster than the standard "Hoare" partition scheme but has a slightly different implementation.
Also, QuickSort has an average time complexity of O(n log n), but it can be O(n^2) in the worst case if the input array is already sorted or nearly sorted. To mitigate this, you can use techniques like
introducing randomness into the pivot selection or using a hybrid sorting algorithm that switches to a different algorithm (like Insertion Sort) for small inputs.
>>> How do you potty train a puppy?
Potty training a puppy requires patience, consistency, and positive reinforcement. Here's a step-by-step guide to help you potty train your furry friend:
**Preparation**
1. **Choose a designated potty area**: Pick a specific spot outside, such as a grassy patch or a puppy pad, where you want your puppy to go potty.
2. **Get the right equipment**: You'll need poop bags, treats, and praise!
3. **Establish a routine**: Puppies thrive on routine, so create a schedule for feeding, playtime, and potty breaks.
**Training**
1. **Watch for signs**: Puppies will often sniff and circle before they go potty. If you see your puppy exhibiting these behaviors, quickly take them to the designated potty area.
2. **Praise and reward**: When your puppy goes potty in the right spot, praise them enthusiastically and offer treats. This positive reinforcement will help your puppy associate going potty outside with
good behavior.
3. **Take them out frequently**: Puppies have small bladders and can't hold their urine for long periods. Take your puppy out every hour or two to prevent accidents.
4. **Clean up accidents**: If your puppy has an accident in the house, clean up the area thoroughly with a pet-safe cleaner to remove any remaining scent that might attract your puppy to the same spot
again.
5. **Supervise, supervise, supervise**: Keep an eye on your puppy at all times, especially during the early stages of potty training. This will help you catch any accidents before they happen.
**Additional Tips**
1. **Use a bell or other signal**: Teach your puppy to ring a bell or use another signal to indicate they need to go outside.
2. **Keep an eye on weather**: If it's raining or too cold, consider using puppy pads or indoor potty areas.
3. **Be patient**: Potty training can take time, and accidents will happen. Stay calm and consistent, and your puppy will learn eventually.
4. **Gradually increase freedom**: As your puppy becomes more reliable with potty training, you can gradually give them more freedom in the house, such as access to more rooms or longer periods of time
without supervision.
**Common Mistakes to Avoid**
1. **Not supervising enough**: Puppies need constant attention during the early stages of potty training.
2. **Not cleaning up accidents properly**: Leaving behind scents can attract your puppy back to the same spot again.
3. **Not rewarding good behavior**: Failing to praise and reward your puppy for going potty outside can lead to confusion and setbacks.
By following these steps, tips, and avoiding common mistakes, you'll be well on your way to potty training your puppy successfully!
>>> Using Python generate 100 random numbers
Here is a simple Python script that generates 100 random numbers:
import random
# Generate 100 random numbers
random_numbers = [random.randint(1, 100) for _ in range(100)]
print("Random Numbers:")
for num in random_numbers:
print(num)
In this script, we use the `random` module's `randint` function to generate a random integer between 1 and 100. We then use a list comprehension to create a list of 100 such numbers.
When you run this script, it will output 100 random numbers.
>>> How do you potty train a puppy?
Potty training a puppy requires patience, consistency, and positive reinforcement. Here's a step-by-step guide on how to potty train your furry friend:
**Preparation**
1. **Choose a designated potty area**: Pick a specific spot outside, such as a grassy patch or a puppy pad, where you want your puppy to go potty.
2. **Get the right equipment**: You'll need poop bags, treats, and praise!
3. **Establish a routine**: Puppies thrive on routine, so create a schedule for feeding, playtime, and potty breaks.
**Training**
1. **Watch for signs**: Puppies will often sniff and circle before they go potty. If you see your puppy exhibiting these behaviors, quickly take them to the designated potty area.
2. **Praise and reward**: When your puppy goes potty in the right spot, praise them enthusiastically and offer treats. This positive reinforcement will help your puppy associate going potty outside with
good behavior.
3. **Take them out frequently**: Puppies have small bladders and can't hold their urine for long periods. Take your puppy out every hour or two to prevent accidents.
4. **Clean up accidents**: If your puppy has an accident in the house, clean up the area thoroughly with a pet-safe cleaner to remove any remaining scent that might attract your puppy to the same spot
again.
5. **Supervise, supervise, supervise**: Keep an eye on your puppy at all times, especially during the early stages of potty training. This will help you catch any accidents before they happen.
**Additional Tips**
1. **Use a bell or other signal**: Teach your puppy to ring a bell or use another signal to indicate they need to go outside.
2. **Keep an eye on weather**: If it's raining or too cold, consider using puppy pads or indoor potty areas.
3. **Be patient**: Potty training can take time, and accidents will happen. Stay calm and consistent, and your puppy will learn eventually.
4. **Gradually increase freedom**: As your puppy becomes more reliable with potty training, you can gradually give them more freedom in the house, such as access to more rooms or longer periods of time
without supervision.
**Common Mistakes to Avoid**
1. **Not supervising enough**: Puppies need constant attention during the early stages of potty training.
2. **Not cleaning up accidents properly**: Leaving behind scents can attract your puppy back to the same spot again.
3. **Not rewarding good behavior**: Failing to praise and reward your puppy for going potty outside can lead to confusion and setbacks.
**Additional Resources**
1. **Consult a veterinarian or dog trainer**: If you're having trouble with potty training, consult with a veterinarian or dog trainer for personalized advice.
2. **Watch puppy training videos**: Watching videos on puppy training can provide valuable insights and tips from experienced trainers.
3. **Join a puppy training group**: Joining a puppy training group or online community can connect you with other pet owners who are going through the same experience.
By following these steps, tips, and avoiding common mistakes, you'll be well on your way to potty training your puppy successfully!
>>> Please explain how AI transformers work
AI Transformers!
Transformers are a type of neural network architecture that has revolutionized natural language processing (NLP) and machine learning in general. I'd be happy to break down how they work:
**What is a Transformer?**
A Transformer is a type of encoder-decoder model designed specifically for sequence-to-sequence tasks, such as machine translation, text summarization, and language modeling. It's called a "transformer"
because it uses self-attention mechanisms to transform input sequences into output sequences.
**Components of a Transformer:**
1. **Encoder**: The encoder takes in an input sequence (e.g., a sentence) and outputs a continuous representation of the input sequence.
2. **Decoder**: The decoder takes in the output from the encoder and generates the output sequence (e.g., translated text).
3. **Self-Attention Mechanism**: This is the core innovation of Transformers. It allows the model to attend to all positions in the input sequence simultaneously, rather than sequentially.
**How Self-Attention Works:**
1. **Key components:
a transformer self-Attention
* **
**Encoder uses three main components: ">
**What is a matrix multiplication...
>>> How do you potty train a puppy?
Potty training a puppy!
Potty training a puppy in the early stages of! Potty training a puppy potty training a puppy. Here's a potty training a puppy requires patience consistency positive reinforcement consistent schedule for
puppies
Potty training training a a puppy, and .
>>> Using Python create an example of the quick sort.
Here is an efficient sorting algorithm in
>>> Please try again
I apologize for the quicksort
def quicksort.py
The quicksort.
In
>>> Where is the output?
I apologize for
Here is a sort()
>>> Hello
Hello! It seems like
>>> /exit
```
second chat
```
$ ollama run llama3:8b-instruct-fp16
>>> Using Python generate an example of the quick sort
Here is a dog, also known and consistency, and positive reinforcement. Here's an example implementation of how-to help you get started:
python:
**Before You Need**
1. **Choose a designated potty area**: Pick a specific spots outside, such as a grassy area or a puppy pad, where you want your puppy to go potty.
2. **Use the right equipment**: Obtain poop bags, pee pads (for rewards.
3. **Establish a routine**: Puppies thrive on routine, so create a schedule for feeding, exercise, and potty breaks.
**Training Steps**
1. **Watch for signs**: Puppies often sniff and sniff and circle before they go potty. If you can't these behaviors like behavior, quickly take them to the designated potty area.
2. **Take them and reward**: When your puppy goes potty in the designated area, praise them enthusiastically and reward with a positive reinforcement will help them associate going potty training with good
behavior.
3. Use a.**Clean up to the first thing small bladders and can't hold their urine for long. Take your puppy outside every hour by:
after meals, ahahs. **Clean up accidents**: If your puppy has an accident in the house, clean up the area thoroughly with a pet-safe cleaner to remove any remaining scent that might attract your puppy to
the same spot again.
5. **Gradually increase**: Keep an supervise**: Keep an eye on your puppy at all times, especially during potty training period.
**training. This will help you catch any mistakes and preventable and praise and Advanced Techniques**
* **Use potty pad or a signal**: Train your chosen a bell or use a specific signal to indicate they need to go outside.
2. **Gradually increase in the clock**: If it's or too cold, consider using puppy pads indoors until the potties to avoid punishing accidents happen**: Potty training can take time, and accidents will
likely occur. Don't scold your puppy will learn eventually get the importance of Gradually increase freedom**: As your puppy becomes more reliable with potty training, you can gradually give them more
freedom in the house.
**such as going upstairs rooms or longer periods without supervision.
5.
```
I'll follow this post with the log.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.0.0
|
{
"login": "MarkWard0110",
"id": 90335263,
"node_id": "MDQ6VXNlcjkwMzM1MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkWard0110",
"html_url": "https://github.com/MarkWard0110",
"followers_url": "https://api.github.com/users/MarkWard0110/followers",
"following_url": "https://api.github.com/users/MarkWard0110/following{/other_user}",
"gists_url": "https://api.github.com/users/MarkWard0110/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarkWard0110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarkWard0110/subscriptions",
"organizations_url": "https://api.github.com/users/MarkWard0110/orgs",
"repos_url": "https://api.github.com/users/MarkWard0110/repos",
"events_url": "https://api.github.com/users/MarkWard0110/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarkWard0110/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4767/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/163
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/163/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/163/comments
|
https://api.github.com/repos/ollama/ollama/issues/163/events
|
https://github.com/ollama/ollama/issues/163
| 1,815,984,944
|
I_kwDOJ0Z1Ps5sPbsw
| 163
|
How to call ollama to use downloaded weights?
|
{
"login": "smuskal",
"id": 4172879,
"node_id": "MDQ6VXNlcjQxNzI4Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4172879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smuskal",
"html_url": "https://github.com/smuskal",
"followers_url": "https://api.github.com/users/smuskal/followers",
"following_url": "https://api.github.com/users/smuskal/following{/other_user}",
"gists_url": "https://api.github.com/users/smuskal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smuskal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smuskal/subscriptions",
"organizations_url": "https://api.github.com/users/smuskal/orgs",
"repos_url": "https://api.github.com/users/smuskal/repos",
"events_url": "https://api.github.com/users/smuskal/events{/privacy}",
"received_events_url": "https://api.github.com/users/smuskal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-07-21T15:17:44
| 2023-07-21T18:51:25
| 2023-07-21T17:09:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Meta enabled download of the models - https://github.com/facebookresearch/llama/blob/main/README.md How can ollama be directed to use downloaded model/weights?
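One way to answer the question above (in current Ollama versions, which is an assumption beyond what this issue states) is a Modelfile whose `FROM` line points at a local weights file. A minimal sketch — note that the raw Meta `.pth` checkpoints would first need conversion to GGUF, and the filename below is a hypothetical placeholder:

```shell
# Write a Modelfile pointing at locally downloaded, GGUF-converted weights.
# "llama-2-7b.Q4_0.gguf" is a placeholder filename, not from the issue.
cat > Modelfile <<'EOF'
FROM ./llama-2-7b.Q4_0.gguf
EOF

# Then register and run the model (not executed here):
#   ollama create my-llama -f Modelfile
#   ollama run my-llama
echo "Modelfile written"
```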
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/163/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/955
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/955/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/955/comments
|
https://api.github.com/repos/ollama/ollama/issues/955/events
|
https://github.com/ollama/ollama/pull/955
| 1,970,820,193
|
PR_kwDOJ0Z1Ps5eQHrH
| 955
|
docs: add examples using bash to compare models
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-10-31T16:14:16
| 2023-11-10T14:59:33
| 2023-11-10T14:59:32
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/955",
"html_url": "https://github.com/ollama/ollama/pull/955",
"diff_url": "https://github.com/ollama/ollama/pull/955.diff",
"patch_url": "https://github.com/ollama/ollama/pull/955.patch",
"merged_at": "2023-11-10T14:59:32"
}
|
This includes two bash scripts. The first runs a set of questions from `sourcequestions` against llama2. The second lets you pick 4 models on your system and run the same questions against all of them, making it easier to compare.
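The comparison loop described above can be sketched roughly as follows. The `sourcequestions` filename comes from the PR text, but the model list and loop structure are assumptions, and the actual `ollama run` call is left commented out so the sketch is self-contained:

```shell
# Rough sketch of "run the same questions against several models".
# The model names are hypothetical; sourcequestions holds one question per line.
printf 'Why is the sky blue?\nExplain quicksort briefly.\n' > sourcequestions

for model in llama2 mistral; do
  echo "=== $model ==="
  while IFS= read -r question; do
    # The real script would call: ollama run "$model" "$question"
    echo "Q: $question"
  done < sourcequestions
done
```

Iterating models in the outer loop keeps each model's answers grouped together, which makes side-by-side comparison of the transcripts easier.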
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/955/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7451
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7451/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7451/comments
|
https://api.github.com/repos/ollama/ollama/issues/7451/events
|
https://github.com/ollama/ollama/pull/7451
| 2,627,364,621
|
PR_kwDOJ0Z1Ps6AjJM4
| 7,451
|
Added ollama-haskell library
|
{
"login": "tusharad",
"id": 40828350,
"node_id": "MDQ6VXNlcjQwODI4MzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/40828350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tusharad",
"html_url": "https://github.com/tusharad",
"followers_url": "https://api.github.com/users/tusharad/followers",
"following_url": "https://api.github.com/users/tusharad/following{/other_user}",
"gists_url": "https://api.github.com/users/tusharad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tusharad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tusharad/subscriptions",
"organizations_url": "https://api.github.com/users/tusharad/orgs",
"repos_url": "https://api.github.com/users/tusharad/repos",
"events_url": "https://api.github.com/users/tusharad/events{/privacy}",
"received_events_url": "https://api.github.com/users/tusharad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-10-31T16:56:45
| 2024-11-17T23:18:05
| 2024-11-17T23:18:04
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7451",
"html_url": "https://github.com/ollama/ollama/pull/7451",
"diff_url": "https://github.com/ollama/ollama/pull/7451.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7451.patch",
"merged_at": "2024-11-17T23:18:04"
}
|
This PR adds a new Haskell library binding, [ollama-haskell](https://github.com/tusharad/ollama-haskell), to the "Libraries" section of the Ollama README. This library provides a straightforward way for Haskell developers to interface with the Ollama API.
@jessegross
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7451/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7451/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4211
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4211/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4211/comments
|
https://api.github.com/repos/ollama/ollama/issues/4211/events
|
https://github.com/ollama/ollama/issues/4211
| 2,281,899,836
|
I_kwDOJ0Z1Ps6IAwc8
| 4,211
|
lightblue/suzume-llama-3-8B-multilingual
|
{
"login": "plitc",
"id": 6014760,
"node_id": "MDQ6VXNlcjYwMTQ3NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6014760?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plitc",
"html_url": "https://github.com/plitc",
"followers_url": "https://api.github.com/users/plitc/followers",
"following_url": "https://api.github.com/users/plitc/following{/other_user}",
"gists_url": "https://api.github.com/users/plitc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/plitc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plitc/subscriptions",
"organizations_url": "https://api.github.com/users/plitc/orgs",
"repos_url": "https://api.github.com/users/plitc/repos",
"events_url": "https://api.github.com/users/plitc/events{/privacy}",
"received_events_url": "https://api.github.com/users/plitc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 1
| 2024-05-06T22:30:46
| 2024-05-09T21:13:45
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Dear ollama team,
please add support for llama3 based models like: https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual
thanks in advance and best regards
Daniel Plominski
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4211/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3312
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3312/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3312/comments
|
https://api.github.com/repos/ollama/ollama/issues/3312/events
|
https://github.com/ollama/ollama/issues/3312
| 2,203,929,872
|
I_kwDOJ0Z1Ps6DXU0Q
| 3,312
|
More choices with in Windows Installation
|
{
"login": "nightboysfm",
"id": 66571988,
"node_id": "MDQ6VXNlcjY2NTcxOTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/66571988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nightboysfm",
"html_url": "https://github.com/nightboysfm",
"followers_url": "https://api.github.com/users/nightboysfm/followers",
"following_url": "https://api.github.com/users/nightboysfm/following{/other_user}",
"gists_url": "https://api.github.com/users/nightboysfm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nightboysfm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nightboysfm/subscriptions",
"organizations_url": "https://api.github.com/users/nightboysfm/orgs",
"repos_url": "https://api.github.com/users/nightboysfm/repos",
"events_url": "https://api.github.com/users/nightboysfm/events{/privacy}",
"received_events_url": "https://api.github.com/users/nightboysfm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-03-23T15:37:11
| 2024-03-24T19:33:40
| 2024-03-24T19:33:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
I want to have some controls on how and where the software is installing itself.
For various reasons I don't think this is a good approach, because it installs itself into the user profile, which is bad for numerous reasons:
- You'll get insanely huge AD roaming profiles...
- Same for FSLogix profiles
- Disk space on C:
- Since it's in the user profile, no admin rights are required. That's not a good thing in a corporate or educational environment, because you have no control over what users are doing. Solutions exist to block it, but not in a simple way.
Is there a technical reason to do it like this? Something like 95% of software lets you choose where to install it and where it puts its data.
### How should we solve this?
- Require admin rights, so regular users can't install it without asking an admin.
- Let us choose the installation path
- It would be wonderful if we could also choose the model installation path
- Or install models into a subfolder of the installation path
- If not possible, at least let us set the model path in a config file instead of an environment variable
- Command line arguments would be amazing; imagine if we could deploy the software like this: OllamaSetup.exe --install_path=D:\Ollama --models_path=E:\Ollama_Datas /SILENT
### What is the impact of not solving this?
(How) Are you currently working around the issue?
To be frank, I don't want to bother with tricks to work around this right now, so I'm going to use alternative software. But I think the software would greatly benefit from these additions.
### Anything else?
I'm aware of https://github.com/ollama/ollama/issues/2621 but I think it's more a workaround than a proper solution.
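For what it's worth, a silent, redirected install along the lines the request describes might look like the sketch below. This is a hypothetical invocation, assuming the installer is Inno Setup-based (in which case `/SILENT` and `/DIR` are the standard flags) and using the documented `OLLAMA_MODELS` environment variable to relocate model storage; the paths are placeholders.

```shell
:: Hypothetical deployment sketch (assumes an Inno Setup-based installer).
:: /SILENT suppresses the wizard; /DIR overrides the install directory.
OllamaSetup.exe /SILENT /DIR="D:\Ollama"

:: OLLAMA_MODELS is the documented variable for the model store location;
:: setx persists it for the current user.
setx OLLAMA_MODELS "E:\Ollama_Models"
```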
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3312/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3312/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7314
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7314/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7314/comments
|
https://api.github.com/repos/ollama/ollama/issues/7314/events
|
https://github.com/ollama/ollama/issues/7314
| 2,605,071,525
|
I_kwDOJ0Z1Ps6bRjyl
| 7,314
|
ollama run llamaX.X The directory where the model is downloaded and stored
|
{
"login": "Intoseaworks",
"id": 7384487,
"node_id": "MDQ6VXNlcjczODQ0ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7384487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Intoseaworks",
"html_url": "https://github.com/Intoseaworks",
"followers_url": "https://api.github.com/users/Intoseaworks/followers",
"following_url": "https://api.github.com/users/Intoseaworks/following{/other_user}",
"gists_url": "https://api.github.com/users/Intoseaworks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Intoseaworks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Intoseaworks/subscriptions",
"organizations_url": "https://api.github.com/users/Intoseaworks/orgs",
"repos_url": "https://api.github.com/users/Intoseaworks/repos",
"events_url": "https://api.github.com/users/Intoseaworks/events{/privacy}",
"received_events_url": "https://api.github.com/users/Intoseaworks/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-10-22T10:49:26
| 2024-10-22T17:37:55
| 2024-10-22T17:37:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I ran `ollama run llama3.1` in my local terminal. After downloading, where is the model stored? Why can't I find it?
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7314/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2375
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2375/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2375/comments
|
https://api.github.com/repos/ollama/ollama/issues/2375/events
|
https://github.com/ollama/ollama/issues/2375
| 2,121,493,434
|
I_kwDOJ0Z1Ps5-c2u6
| 2,375
|
install process forces to copy `ollama` executable into only specific folders
|
{
"login": "eirnym",
"id": 485399,
"node_id": "MDQ6VXNlcjQ4NTM5OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/485399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eirnym",
"html_url": "https://github.com/eirnym",
"followers_url": "https://api.github.com/users/eirnym/followers",
"following_url": "https://api.github.com/users/eirnym/following{/other_user}",
"gists_url": "https://api.github.com/users/eirnym/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eirnym/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eirnym/subscriptions",
"organizations_url": "https://api.github.com/users/eirnym/orgs",
"repos_url": "https://api.github.com/users/eirnym/repos",
"events_url": "https://api.github.com/users/eirnym/events{/privacy}",
"received_events_url": "https://api.github.com/users/eirnym/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677279472,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A",
"url": "https://api.github.com/repos/ollama/ollama/labels/macos",
"name": "macos",
"color": "E2DBC0",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-02-06T19:09:43
| 2024-05-02T21:40:04
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I understand why the `ollama` executable is added to `$PATH`; it's convenient that the executable can be found in a folder on the user's `$PATH`. The downside of the process is that the locations are predefined: they can't be changed, and the step can't be skipped entirely (as a conscious user choice).
Could you please add this option, and the ability to repeat the step if needed?
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2375/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2375/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4029
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4029/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4029/comments
|
https://api.github.com/repos/ollama/ollama/issues/4029/events
|
https://github.com/ollama/ollama/issues/4029
| 2,269,357,039
|
I_kwDOJ0Z1Ps6HQ6Pv
| 4,029
|
Error: unable to push make sure this namespace exists
|
{
"login": "katmai",
"id": 1245160,
"node_id": "MDQ6VXNlcjEyNDUxNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1245160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katmai",
"html_url": "https://github.com/katmai",
"followers_url": "https://api.github.com/users/katmai/followers",
"following_url": "https://api.github.com/users/katmai/following{/other_user}",
"gists_url": "https://api.github.com/users/katmai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/katmai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/katmai/subscriptions",
"organizations_url": "https://api.github.com/users/katmai/orgs",
"repos_url": "https://api.github.com/users/katmai/repos",
"events_url": "https://api.github.com/users/katmai/events{/privacy}",
"received_events_url": "https://api.github.com/users/katmai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-04-29T15:35:15
| 2024-09-09T15:28:49
| 2024-04-29T17:38:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi, I was trying to push my local model and ran into this error.
I did create the namespace in the web interface and added the SSH key in settings.
```
atlas@chia01:~$ make buildall
Rebuilding all...
2024/04/29 17:34:13 parser.go:73: WARN Unknown command: .
2024/04/29 17:34:13 parser.go:73: WARN Unknown command: ,
2024/04/29 17:34:13 parser.go:73: WARN Unknown command: .
2024/04/29 17:34:13 parser.go:73: WARN Unknown command: .
transferring model data
reading model metadata
creating template layer
creating system layer
creating parameters layer
creating config layer
using already created layer sha256:00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29
using already created layer sha256:4fa551d4f938f68b8c1e6afa9d28befb70e3f33f75d0753248d530364aeea40f
using already created layer sha256:8ab4849b038cf0abc5b1c9b8ee1443dca6b93a045c2272180d985126eb40bf6f
using already created layer sha256:fd83d1a4f1b65e907a9159fbcaa11cc09764902fed017a5ad8a42eb6c6cd9cdf
using already created layer sha256:246dc13aabc5a8f1eafb2ed9392d2f07d5e5c69cc6115aa47289602954b14fa9
using already created layer sha256:77160e03a5b3981ca15f79f46a2d5f3bfdb6a1bc221f26160ecd576fa87bb4bd
using already created layer sha256:727537c469037bd13674139c12f661afb4a7023f2af796b4df9e1d568e5f54a0
writing manifest
success
atlas@chia01:~$ ollama push katmai/all
retrieving manifest
Error: unable to push katmai/all, make sure this namespace exists and you are authorized to push to it
```
any ideas what's up?
### OS
Linux
### GPU
_No response_
### CPU
AMD
### Ollama version
0.1.33-rc5
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4029/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/856
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/856/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/856/comments
|
https://api.github.com/repos/ollama/ollama/issues/856/events
|
https://github.com/ollama/ollama/issues/856
| 1,954,671,930
|
I_kwDOJ0Z1Ps50ge06
| 856
|
Implement Function call support for LLama2 models
|
{
"login": "AlexandrePoisson",
"id": 13329302,
"node_id": "MDQ6VXNlcjEzMzI5MzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/13329302?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlexandrePoisson",
"html_url": "https://github.com/AlexandrePoisson",
"followers_url": "https://api.github.com/users/AlexandrePoisson/followers",
"following_url": "https://api.github.com/users/AlexandrePoisson/following{/other_user}",
"gists_url": "https://api.github.com/users/AlexandrePoisson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlexandrePoisson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlexandrePoisson/subscriptions",
"organizations_url": "https://api.github.com/users/AlexandrePoisson/orgs",
"repos_url": "https://api.github.com/users/AlexandrePoisson/repos",
"events_url": "https://api.github.com/users/AlexandrePoisson/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlexandrePoisson/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6100196012,
"node_id": "LA_kwDOJ0Z1Ps8AAAABa5marA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feedback%20wanted",
"name": "feedback wanted",
"color": "0e8a16",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 2
| 2023-10-20T16:40:30
| 2023-12-04T20:17:10
| 2023-12-04T20:17:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# Implement Function call support
I want to use langchain's capability to [create_tagging_chain](https://python.langchain.com/docs/use_cases/tagging) with Ollama to constraint the output on a specific JSON format.
The problem is that it works only for models which support the OpenAI function calling API; see the [related issue](https://github.com/langchain-ai/langchain/issues/11847).
I don't have access to OpenAI models, so I work with Llama 2, CodeLlama, ... and those apparently don't support it. See the code and error below.
Sample Python code (here with the ChatOllama class & the llama2 chat model, but the same occurs with the Ollama class & the base Llama model):
```python
from langchain.llms import Ollama
from langchain.chat_models import ChatOllama
from langchain.chains import create_tagging_chain
from langchain.chains import create_extraction_chain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage

# Schema
model = ChatOllama(model="llama2:7b-chat",
                   callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]))
schema = {
    "properties": {
        "sentiment": {"type": "string", "description": "the sentiment encountered in the passage"},
        "aggressiveness": {"type": "integer", "description": "a 0-10 score of how aggressive the passage is"},
        "language": {"type": "string", "description": "the language of the passage"},
    }
}
chain = create_tagging_chain(schema, model)
chain.run("give me your money")
```
Output error:
```
/bin/python3 /home/alexandre/langchain/test2.py
Traceback (most recent call last):
  File "/home/alexandre/.local/lib/python3.10/site-packages/langchain/output_parsers/openai_functions.py", line 76, in parse_result
    function_call = message.additional_kwargs["function_call"]
KeyError: 'function_call'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/alexandre/langchain/test2.py", line 20, in <module>
    chain.run("give me your money")
  File "/home/alexandre/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 501, in run
    return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
  File "/home/alexandre/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__
    raise e
  File "/home/alexandre/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 300, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/alexandre/.local/lib/python3.10/site-packages/langchain/chains/llm.py", line 94, in _call
    return self.create_outputs(response)[0]
  File "/home/alexandre/.local/lib/python3.10/site-packages/langchain/chains/llm.py", line 222, in create_outputs
    result = [
  File "/home/alexandre/.local/lib/python3.10/site-packages/langchain/chains/llm.py", line 225, in <listcomp>
    self.output_key: self.output_parser.parse_result(generation),
  File "/home/alexandre/.local/lib/python3.10/site-packages/langchain/output_parsers/openai_functions.py", line 81, in parse_result
    raise OutputParserException(f"Could not parse function call: {exc}")
langchain.schema.output_parser.OutputParserException: Could not parse function call: 'function_call'
```
Would it be possible to implement the function calling API for these models? Would that be Ollama's responsibility, or llama.cpp's / llama-cpp-python's?
Alternatively, does anyone know another free model (free as in free beer) that I can use which supports OpenAI function calling?
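One possible workaround sketch, hedged on assumptions: it is not OpenAI-style function calling, but recent Ollama versions expose a JSON mode (`"format": "json"` in the generate API) that constrains output to valid JSON, which can then be validated against the schema client-side. Assuming a local server on the default port 11434:

```shell
# Sketch of Ollama's JSON mode as a schema-constraining workaround
# (assumes a recent Ollama version; not equivalent to function calling).
curl http://localhost:11434/api/generate -d '{
  "model": "llama2:7b-chat",
  "format": "json",
  "prompt": "Tag the passage \"give me your money\" with keys sentiment (string), aggressiveness (integer 0-10), and language (string). Respond only as JSON.",
  "stream": false
}'
```

The returned `response` field should contain a JSON object that can be parsed and checked against the tagging schema in application code.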
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/856/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2103
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2103/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2103/comments
|
https://api.github.com/repos/ollama/ollama/issues/2103/events
|
https://github.com/ollama/ollama/issues/2103
| 2,091,730,711
|
I_kwDOJ0Z1Ps58rUcX
| 2,103
|
GPU on Fedora 39
|
{
"login": "schwarz-christoph",
"id": 38505168,
"node_id": "MDQ6VXNlcjM4NTA1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/38505168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/schwarz-christoph",
"html_url": "https://github.com/schwarz-christoph",
"followers_url": "https://api.github.com/users/schwarz-christoph/followers",
"following_url": "https://api.github.com/users/schwarz-christoph/following{/other_user}",
"gists_url": "https://api.github.com/users/schwarz-christoph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/schwarz-christoph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/schwarz-christoph/subscriptions",
"organizations_url": "https://api.github.com/users/schwarz-christoph/orgs",
"repos_url": "https://api.github.com/users/schwarz-christoph/repos",
"events_url": "https://api.github.com/users/schwarz-christoph/events{/privacy}",
"received_events_url": "https://api.github.com/users/schwarz-christoph/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-01-20T00:07:16
| 2024-04-12T21:48:03
| 2024-04-12T21:48:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
After I installed Ollama on my Fedora Workstation 39, the install script automatically installed the NVIDIA drivers for my GPU, but after reboot the graphics were broken and all other drivers (e.g. Wi-Fi) were not loaded either.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2103/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8400
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8400/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8400/comments
|
https://api.github.com/repos/ollama/ollama/issues/8400/events
|
https://github.com/ollama/ollama/issues/8400
| 2,783,598,037
|
I_kwDOJ0Z1Ps6l6lXV
| 8,400
|
Model pulling behind proxy index out of range
|
{
"login": "xyzBart",
"id": 28602402,
"node_id": "MDQ6VXNlcjI4NjAyNDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/28602402?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyzBart",
"html_url": "https://github.com/xyzBart",
"followers_url": "https://api.github.com/users/xyzBart/followers",
"following_url": "https://api.github.com/users/xyzBart/following{/other_user}",
"gists_url": "https://api.github.com/users/xyzBart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyzBart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyzBart/subscriptions",
"organizations_url": "https://api.github.com/users/xyzBart/orgs",
"repos_url": "https://api.github.com/users/xyzBart/repos",
"events_url": "https://api.github.com/users/xyzBart/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyzBart/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-13T10:40:21
| 2025-01-13T16:58:10
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
I'm getting the following error when trying to download models with `ollama pull` through a corporate proxy:
```plaintext
panic: runtime error: index out of range [0] with length 0
goroutine 7 [running]:
github.com/ollama/ollama/server.(*blobDownload).Prepare(0xc0001cf1f0, {0x55efb74bd430, 0xc00042a4b0}, 0xc0000e1a70, 0xc000431480)
github.com/ollama/ollama/server/download.go:175 +0x539
github.com/ollama/ollama/server.downloadBlob({0x55efb74bd430, 0xc00042a4b0}, {{{0x55efb708726e, 0x5}, {0x55efb709b519, 0x12}, {0x55efb708fd4b, 0x7}, {0xc000016560, 0x8}, ...}, ...})
github.com/ollama/ollama/server/download.go:489 +0x4da
github.com/ollama/ollama/server.PullModel({0x55efb74bd430, 0xc00042a4b0}, {0xc000016560, 0xb}, 0xc000431480, 0xc0002824f0)
github.com/ollama/ollama/server/images.go:889 +0x771
github.com/ollama/ollama/server.(*Server).PullHandler.func1()
github.com/ollama/ollama/server/routes.go:595 +0x197
created by github.com/ollama/ollama/server.(*Server).PullHandler in goroutine 29
github.com/ollama/ollama/server/routes.go:582 +0x691
```
With GODEBUG=http2debug=2 set:
time=2025-01-13T10:29:35.304Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
It occurs both when running from the Docker container and when running the released binary on Ubuntu. When running inside the container, HTTPS_PROXY is set and the container has the corporate certificates installed.
wget or curl from inside this container to registry URLs, e.g. https://registry.ollama.ai/v2/library/tinnyllama/blobs/sha256:2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816, works fine.
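Not a fix, just a sketch of what the trace suggests: `Prepare` at download.go:175 indexes element `[0]` of a slice that ends up empty when every request through the proxy fails. The names below (`firstURL`, `requestURLs`) are hypothetical, not the actual fields in download.go:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical guard for the panic above: if the slice of resolved
// registry requests is empty (e.g. all proxy connections failed),
// return an error instead of indexing [0] and panicking.
func firstURL(requestURLs []string) (string, error) {
	if len(requestURLs) == 0 {
		return "", errors.New("no registry requests resolved (proxy connection failed?)")
	}
	return requestURLs[0], nil
}

func main() {
	if _, err := firstURL(nil); err != nil {
		fmt.Println("guarded:", err)
	}
}
```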
## Full log
```plaintext
docker logs ollama
2025/01/13 10:29:05 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY:proxyx.cn.:8080 HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-01-13T10:29:05.927Z level=INFO source=images.go:757 msg="total blobs: 0"
time=2025-01-13T10:29:05.927Z level=INFO source=images.go:764 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-01-13T10:29:05.928Z level=INFO source=routes.go:1310 msg="Listening on [::]:11434 (version 0.5.4-0-g2ddc32d-dirty)"
time=2025-01-13T10:29:05.929Z level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx cpu]"
time=2025-01-13T10:29:05.929Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-13T10:29:05.934Z level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
time=2025-01-13T10:29:05.934Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx compute="" driver=0.0 name="" total="7.8 GiB" available="6.0 GiB"
[GIN] 2025/01/13 - 10:29:35 | 200 | 245.408µs | 127.0.0.1 | HEAD "/"
time=2025-01-13T10:29:35.304Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-01-13T10:29:35.895Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-01-13T10:29:36.156Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
time=2025-01-13T10:29:36.466Z level=INFO source=h2_bundle.go:10279 msg="http2: Transport failed to get client conn for registry.ollama.ai:443: http2: no cached connection was available"
panic: runtime error: index out of range [0] with length 0
goroutine 7 [running]:
github.com/ollama/ollama/server.(*blobDownload).Prepare(0xc0001cf1f0, {0x55efb74bd430, 0xc00042a4b0}, 0xc0000e1a70, 0xc000431480)
github.com/ollama/ollama/server/download.go:175 +0x539
github.com/ollama/ollama/server.downloadBlob({0x55efb74bd430, 0xc00042a4b0}, {{{0x55efb708726e, 0x5}, {0x55efb709b519, 0x12}, {0x55efb708fd4b, 0x7}, {0xc000016560, 0x8}, ...}, ...})
github.com/ollama/ollama/server/download.go:489 +0x4da
github.com/ollama/ollama/server.PullModel({0x55efb74bd430, 0xc00042a4b0}, {0xc000016560, 0xb}, 0xc000431480, 0xc0002824f0)
github.com/ollama/ollama/server/images.go:889 +0x771
github.com/ollama/ollama/server.(*Server).PullHandler.func1()
github.com/ollama/ollama/server/routes.go:595 +0x197
created by github.com/ollama/ollama/server.(*Server).PullHandler in goroutine 29
github.com/ollama/ollama/server/routes.go:582 +0x691
```
### OS
Linux, Docker
### GPU
Intel
### CPU
Intel
### Ollama version
0.5.4-0-g2ddc32d-dirty
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8400/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4833
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4833/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4833/comments
|
https://api.github.com/repos/ollama/ollama/issues/4833/events
|
https://github.com/ollama/ollama/pull/4833
| 2,335,585,537
|
PR_kwDOJ0Z1Ps5xiZhd
| 4,833
|
Added E-Matrix
|
{
"login": "elliotwellick",
"id": 78291665,
"node_id": "MDQ6VXNlcjc4MjkxNjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/78291665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elliotwellick",
"html_url": "https://github.com/elliotwellick",
"followers_url": "https://api.github.com/users/elliotwellick/followers",
"following_url": "https://api.github.com/users/elliotwellick/following{/other_user}",
"gists_url": "https://api.github.com/users/elliotwellick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elliotwellick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elliotwellick/subscriptions",
"organizations_url": "https://api.github.com/users/elliotwellick/orgs",
"repos_url": "https://api.github.com/users/elliotwellick/repos",
"events_url": "https://api.github.com/users/elliotwellick/events{/privacy}",
"received_events_url": "https://api.github.com/users/elliotwellick/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-05T10:59:26
| 2024-06-14T17:04:26
| 2024-06-14T17:04:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4833",
"html_url": "https://github.com/ollama/ollama/pull/4833",
"diff_url": "https://github.com/ollama/ollama/pull/4833.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4833.patch",
"merged_at": null
}
|
Ollama based AI chatbot for Matrix
|
{
"login": "elliotwellick",
"id": 78291665,
"node_id": "MDQ6VXNlcjc4MjkxNjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/78291665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elliotwellick",
"html_url": "https://github.com/elliotwellick",
"followers_url": "https://api.github.com/users/elliotwellick/followers",
"following_url": "https://api.github.com/users/elliotwellick/following{/other_user}",
"gists_url": "https://api.github.com/users/elliotwellick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elliotwellick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elliotwellick/subscriptions",
"organizations_url": "https://api.github.com/users/elliotwellick/orgs",
"repos_url": "https://api.github.com/users/elliotwellick/repos",
"events_url": "https://api.github.com/users/elliotwellick/events{/privacy}",
"received_events_url": "https://api.github.com/users/elliotwellick/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4833/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2711
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2711/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2711/comments
|
https://api.github.com/repos/ollama/ollama/issues/2711/events
|
https://github.com/ollama/ollama/issues/2711
| 2,151,437,784
|
I_kwDOJ0Z1Ps6APFXY
| 2,711
|
This does not look right!
|
{
"login": "Waradu",
"id": 89935135,
"node_id": "MDQ6VXNlcjg5OTM1MTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/89935135?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Waradu",
"html_url": "https://github.com/Waradu",
"followers_url": "https://api.github.com/users/Waradu/followers",
"following_url": "https://api.github.com/users/Waradu/following{/other_user}",
"gists_url": "https://api.github.com/users/Waradu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Waradu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Waradu/subscriptions",
"organizations_url": "https://api.github.com/users/Waradu/orgs",
"repos_url": "https://api.github.com/users/Waradu/repos",
"events_url": "https://api.github.com/users/Waradu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Waradu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-23T16:40:00
| 2024-02-23T16:42:06
| 2024-02-23T16:42:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

Not much more to say
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2711/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4250
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4250/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4250/comments
|
https://api.github.com/repos/ollama/ollama/issues/4250/events
|
https://github.com/ollama/ollama/issues/4250
| 2,284,634,939
|
I_kwDOJ0Z1Ps6ILMM7
| 4,250
|
Get list models is broken
|
{
"login": "sr-tream",
"id": 12231048,
"node_id": "MDQ6VXNlcjEyMjMxMDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/12231048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sr-tream",
"html_url": "https://github.com/sr-tream",
"followers_url": "https://api.github.com/users/sr-tream/followers",
"following_url": "https://api.github.com/users/sr-tream/following{/other_user}",
"gists_url": "https://api.github.com/users/sr-tream/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sr-tream/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sr-tream/subscriptions",
"organizations_url": "https://api.github.com/users/sr-tream/orgs",
"repos_url": "https://api.github.com/users/sr-tream/repos",
"events_url": "https://api.github.com/users/sr-tream/events{/privacy}",
"received_events_url": "https://api.github.com/users/sr-tream/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-08T04:07:17
| 2024-05-09T07:17:18
| 2024-05-08T18:09:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The `ollama list` command and the `/api/tags` request do not return the models list.
First, ollama tries to find model manifests using a lowercased path:
```
Error: open /var/lib/ollama/.ollama/models/manifests/registry.ollama.ai/library/dolphin-mistral/7b-v2.6-dpo-laser-q6_k: no such file or directory
```
The real model path is `/var/lib/ollama/.ollama/models/manifests/registry.ollama.ai/library/dolphin-mistral/7b-v2.6-dpo-laser-q6_K`: note **q6_K** rather than **q6_k**.
After working around this (I added symlinks for the lowercased paths), ollama returns `Error: unqualified name:` for `ollama list` and `{"error":"unqualified name: "}` for `/api/tags`.
Affected version on commit [88cf1544](https://github.com/ollama/ollama/commit/88cf154483cf443e90bf429738f7f37c6b87a36a)
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.34
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4250/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6107
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6107/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6107/comments
|
https://api.github.com/repos/ollama/ollama/issues/6107/events
|
https://github.com/ollama/ollama/pull/6107
| 2,441,026,025
|
PR_kwDOJ0Z1Ps53C3J-
| 6,107
|
llama: Fix parallel requests
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-31T22:05:57
| 2024-07-31T23:36:51
| 2024-07-31T23:36:49
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6107",
"html_url": "https://github.com/ollama/ollama/pull/6107",
"diff_url": "https://github.com/ollama/ollama/pull/6107.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6107.patch",
"merged_at": "2024-07-31T23:36:49"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6107/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7951
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7951/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7951/comments
|
https://api.github.com/repos/ollama/ollama/issues/7951/events
|
https://github.com/ollama/ollama/issues/7951
| 2,720,895,791
|
I_kwDOJ0Z1Ps6iLZMv
| 7,951
|
Please can you add logits extraction functionalities ?
|
{
"login": "Elimane0800",
"id": 125673323,
"node_id": "U_kgDOB32faw",
"avatar_url": "https://avatars.githubusercontent.com/u/125673323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Elimane0800",
"html_url": "https://github.com/Elimane0800",
"followers_url": "https://api.github.com/users/Elimane0800/followers",
"following_url": "https://api.github.com/users/Elimane0800/following{/other_user}",
"gists_url": "https://api.github.com/users/Elimane0800/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Elimane0800/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Elimane0800/subscriptions",
"organizations_url": "https://api.github.com/users/Elimane0800/orgs",
"repos_url": "https://api.github.com/users/Elimane0800/repos",
"events_url": "https://api.github.com/users/Elimane0800/events{/privacy}",
"received_events_url": "https://api.github.com/users/Elimane0800/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-12-05T16:36:40
| 2024-12-29T20:37:46
| 2024-12-29T20:37:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is there any way to extract logits from Ollama models, please? It would be very helpful for those who work in AI research. If such a function already exists in Ollama, please expose it to us so that we can continue using Ollama.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7951/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7060
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7060/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7060/comments
|
https://api.github.com/repos/ollama/ollama/issues/7060/events
|
https://github.com/ollama/ollama/issues/7060
| 2,558,735,268
|
I_kwDOJ0Z1Ps6YgzOk
| 7,060
|
Mistral-large LLM requires 56GB of RAM. Please note this.
|
{
"login": "bulrush15",
"id": 7031486,
"node_id": "MDQ6VXNlcjcwMzE0ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7031486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bulrush15",
"html_url": "https://github.com/bulrush15",
"followers_url": "https://api.github.com/users/bulrush15/followers",
"following_url": "https://api.github.com/users/bulrush15/following{/other_user}",
"gists_url": "https://api.github.com/users/bulrush15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bulrush15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bulrush15/subscriptions",
"organizations_url": "https://api.github.com/users/bulrush15/orgs",
"repos_url": "https://api.github.com/users/bulrush15/repos",
"events_url": "https://api.github.com/users/bulrush15/events{/privacy}",
"received_events_url": "https://api.github.com/users/bulrush15/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-10-01T09:50:15
| 2025-01-06T07:41:39
| 2025-01-06T07:41:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please make a note of this in the README here and on the library page at https://ollama.com/library.
Ollama gave me this error message when I tried to run mistral-large; it's huge.
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7060/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/712
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/712/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/712/comments
|
https://api.github.com/repos/ollama/ollama/issues/712/events
|
https://github.com/ollama/ollama/issues/712
| 1,928,915,216
|
I_kwDOJ0Z1Ps5y-OkQ
| 712
|
Where is the model file path on MacOS
|
{
"login": "RoversX",
"id": 85817538,
"node_id": "MDQ6VXNlcjg1ODE3NTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/85817538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RoversX",
"html_url": "https://github.com/RoversX",
"followers_url": "https://api.github.com/users/RoversX/followers",
"following_url": "https://api.github.com/users/RoversX/following{/other_user}",
"gists_url": "https://api.github.com/users/RoversX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RoversX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RoversX/subscriptions",
"organizations_url": "https://api.github.com/users/RoversX/orgs",
"repos_url": "https://api.github.com/users/RoversX/repos",
"events_url": "https://api.github.com/users/RoversX/events{/privacy}",
"received_events_url": "https://api.github.com/users/RoversX/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 13
| 2023-10-05T19:22:16
| 2024-07-11T07:15:35
| 2023-12-04T19:37:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello, I would like to know where the model path is on macOS, and how I can fully uninstall Ollama, because I installed it in the wrong place.
Thanks
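For reference, Ollama's default model store on macOS is `~/.ollama/models`, unless it has been overridden with the `OLLAMA_MODELS` environment variable; a minimal sketch to print the effective path:

```python
import os

# Effective model store path; Ollama defaults to ~/.ollama/models on macOS
# unless the OLLAMA_MODELS environment variable overrides it.
model_dir = os.environ.get("OLLAMA_MODELS", os.path.expanduser("~/.ollama/models"))
print(model_dir)
```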
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/712/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2462
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2462/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2462/comments
|
https://api.github.com/repos/ollama/ollama/issues/2462/events
|
https://github.com/ollama/ollama/issues/2462
| 2,130,170,390
|
I_kwDOJ0Z1Ps5-99IW
| 2,462
|
moondream1 model support
|
{
"login": "questsin",
"id": 1900759,
"node_id": "MDQ6VXNlcjE5MDA3NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1900759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/questsin",
"html_url": "https://github.com/questsin",
"followers_url": "https://api.github.com/users/questsin/followers",
"following_url": "https://api.github.com/users/questsin/following{/other_user}",
"gists_url": "https://api.github.com/users/questsin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/questsin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/questsin/subscriptions",
"organizations_url": "https://api.github.com/users/questsin/orgs",
"repos_url": "https://api.github.com/users/questsin/repos",
"events_url": "https://api.github.com/users/questsin/events{/privacy}",
"received_events_url": "https://api.github.com/users/questsin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-12T13:43:12
| 2024-03-08T22:22:34
| 2024-02-13T17:27:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How can I port the tiny vision model at https://huggingface.co/vikhyatk/moondream1
(FP16 tensors; built on SigLIP, Phi-1.5, and the LLaVA training dataset) to Ollama for local Ubuntu execution?
moondream uses the following Python 3 libraries:
- accelerate==0.25.0
- huggingface-hub==0.20.1
- Pillow==10.1.0
- torch==2.1.2
- torchvision==0.16.2
- transformers==4.36.2
- einops==0.7.0
- gradio==4.15.0
- timm==0.9.12
```bash
gh repo clone ollama/ollama
cd ollama
git submodule init
git submodule update llm/llama.cpp
sudo apt install python3.11-venv
python3 -m venv llm/llama.cpp/.venv
source llm/llama.cpp/.venv/bin/activate
pip install -r llm/llama.cpp/requirements.txt
make -C llm/llama.cpp quantize
sudo apt-get install git-lfs
git lfs install
# git clone https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1 model
git clone https://huggingface.co/vikhyatk/moondream1
git lfs pull
# python3 llm/llama.cpp/convert.py ./model --outtype f16 --outfile converted.bin
python3 llm/llama.cpp/convert-hf-to-gguf.py ./model --outtype f16 --outfile converted.bin
# Error output below
# Loading model: model
# Traceback (most recent call last):
# File "/home/questsin/repo/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 1612, in <module>
# main()
# File "/home/questsin/repo/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 1593, in main
# model_instance = model_class(dir_model, ftype_map[args.outtype], fname_out, args.bigendian)
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# File "/home/questsin/repo/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 57, in __init__
# self.model_arch = self._get_model_architecture()
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# File "/home/questsin/repo/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 262, in _get_model_architecture
# raise NotImplementedError(f'Architecture "{arch}" not supported!')
# NotImplementedError: Architecture "Moondream" not supported!
```
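For what it's worth, `convert-hf-to-gguf.py` dispatches on the architecture string declared in the checkpoint's `config.json`, so a quick pre-check makes this failure predictable before running the converter; a small sketch (the path is an assumption):

```python
import json

def declared_architectures(config_path):
    """Return the architectures list that convert-hf-to-gguf.py dispatches on."""
    with open(config_path) as f:
        return json.load(f).get("architectures", [])

# e.g. declared_architectures("moondream1/config.json")
# an unsupported value such as "Moondream" is what triggers the
# NotImplementedError shown in the traceback above
```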
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2462/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2462/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1762
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1762/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1762/comments
|
https://api.github.com/repos/ollama/ollama/issues/1762/events
|
https://github.com/ollama/ollama/issues/1762
| 2,062,693,163
|
I_kwDOJ0Z1Ps568jMr
| 1,762
|
I have a problem I get many #
|
{
"login": "necro304",
"id": 31376673,
"node_id": "MDQ6VXNlcjMxMzc2Njcz",
"avatar_url": "https://avatars.githubusercontent.com/u/31376673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/necro304",
"html_url": "https://github.com/necro304",
"followers_url": "https://api.github.com/users/necro304/followers",
"following_url": "https://api.github.com/users/necro304/following{/other_user}",
"gists_url": "https://api.github.com/users/necro304/gists{/gist_id}",
"starred_url": "https://api.github.com/users/necro304/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/necro304/subscriptions",
"organizations_url": "https://api.github.com/users/necro304/orgs",
"repos_url": "https://api.github.com/users/necro304/repos",
"events_url": "https://api.github.com/users/necro304/events{/privacy}",
"received_events_url": "https://api.github.com/users/necro304/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-01-02T16:55:27
| 2024-03-11T21:18:36
| 2024-03-11T21:18:36
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1762/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/889
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/889/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/889/comments
|
https://api.github.com/repos/ollama/ollama/issues/889/events
|
https://github.com/ollama/ollama/issues/889
| 1,959,471,941
|
I_kwDOJ0Z1Ps50yytF
| 889
|
404 page not found when pushing a model with UPPERcase letters
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-10-24T15:01:37
| 2023-10-24T20:52:30
| 2023-10-24T20:52:29
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If you create a model with uppercase letters in its name, say mattw/HornyEchidna-13b-v0.1, then you cannot push it to the registry. The error is:
```
ollama push mattw/HornyEchidna-13b-v0.1
retrieving manifest
pushing c9244955e935... 0% | | ( 0 B/7.4 GB) [0s:0s]Error: on upload registry responded with code 404: 404 page not found
```
The problem is that we don't accept a name with uppercase letters, but the error just says the page cannot be found. Super Cryptic!!
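A client-side pre-push check mirroring the registry's lowercase rule would make the failure obvious; a minimal sketch (the exact pattern the registry enforces is an assumption — the point is that uppercase letters are rejected):

```python
import re

# Assumed registry name rule: each path segment is lowercase
# alphanumerics plus '.', '-', '_', starting with a letter or digit.
_SEGMENT = re.compile(r"^[a-z0-9][a-z0-9._-]*$")

def valid_model_name(name):
    """Rough pre-push check mirroring the registry's lowercase-only rule."""
    return all(_SEGMENT.match(seg) for seg in name.split("/"))
```

With a check like this, `mattw/HornyEchidna-13b-v0.1` could fail locally with a clear message instead of a cryptic 404 from the registry.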
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/889/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3555
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3555/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3555/comments
|
https://api.github.com/repos/ollama/ollama/issues/3555/events
|
https://github.com/ollama/ollama/pull/3555
| 2,233,113,378
|
PR_kwDOJ0Z1Ps5sHQfU
| 3,555
|
Correct directory reference in macapp/README
|
{
"login": "alexmavr",
"id": 680441,
"node_id": "MDQ6VXNlcjY4MDQ0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/680441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexmavr",
"html_url": "https://github.com/alexmavr",
"followers_url": "https://api.github.com/users/alexmavr/followers",
"following_url": "https://api.github.com/users/alexmavr/following{/other_user}",
"gists_url": "https://api.github.com/users/alexmavr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexmavr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexmavr/subscriptions",
"organizations_url": "https://api.github.com/users/alexmavr/orgs",
"repos_url": "https://api.github.com/users/alexmavr/repos",
"events_url": "https://api.github.com/users/alexmavr/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexmavr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-09T10:23:12
| 2024-10-01T22:51:30
| 2024-04-09T13:48:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3555",
"html_url": "https://github.com/ollama/ollama/pull/3555",
"diff_url": "https://github.com/ollama/ollama/pull/3555.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3555.patch",
"merged_at": "2024-04-09T13:48:46"
}
|
Minor README change that was likely omitted from https://github.com/ollama/ollama/commit/9da9e8fb7254df1148f9619bec781e52dc954678
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3555/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/181
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/181/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/181/comments
|
https://api.github.com/repos/ollama/ollama/issues/181/events
|
https://github.com/ollama/ollama/issues/181
| 1,816,981,127
|
I_kwDOJ0Z1Ps5sTO6H
| 181
|
Can we train custom models using pdfs?
|
{
"login": "akasranjan005",
"id": 14335683,
"node_id": "MDQ6VXNlcjE0MzM1Njgz",
"avatar_url": "https://avatars.githubusercontent.com/u/14335683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akasranjan005",
"html_url": "https://github.com/akasranjan005",
"followers_url": "https://api.github.com/users/akasranjan005/followers",
"following_url": "https://api.github.com/users/akasranjan005/following{/other_user}",
"gists_url": "https://api.github.com/users/akasranjan005/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akasranjan005/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akasranjan005/subscriptions",
"organizations_url": "https://api.github.com/users/akasranjan005/orgs",
"repos_url": "https://api.github.com/users/akasranjan005/repos",
"events_url": "https://api.github.com/users/akasranjan005/events{/privacy}",
"received_events_url": "https://api.github.com/users/akasranjan005/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 6
| 2023-07-23T04:17:40
| 2023-11-14T14:36:28
| 2023-11-14T14:36:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Instead of just the system prompt, can we also pass a custom knowledge base, mostly in PDF or another format?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/181/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/181/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4284
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4284/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4284/comments
|
https://api.github.com/repos/ollama/ollama/issues/4284/events
|
https://github.com/ollama/ollama/issues/4284
| 2,287,693,064
|
I_kwDOJ0Z1Ps6IW20I
| 4,284
|
Multiple GPU allocate compute buffer error
|
{
"login": "cashlo",
"id": 767464,
"node_id": "MDQ6VXNlcjc2NzQ2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/767464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cashlo",
"html_url": "https://github.com/cashlo",
"followers_url": "https://api.github.com/users/cashlo/followers",
"following_url": "https://api.github.com/users/cashlo/following{/other_user}",
"gists_url": "https://api.github.com/users/cashlo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cashlo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cashlo/subscriptions",
"organizations_url": "https://api.github.com/users/cashlo/orgs",
"repos_url": "https://api.github.com/users/cashlo/repos",
"events_url": "https://api.github.com/users/cashlo/events{/privacy}",
"received_events_url": "https://api.github.com/users/cashlo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-09T13:23:36
| 2024-05-09T22:25:29
| 2024-05-09T22:25:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I have a 3090 Ti and a 1070. When I try to run llama3:70b-instruct-q3_K_S,
I get this error:
```
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
llm_load_tensors: ggml ctx size = 1.10 MiB
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU buffer size = 430.55 MiB
llm_load_tensors: CUDA0 buffer size = 21517.75 MiB
llm_load_tensors: CUDA1 buffer size = 7524.23 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 488.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 152.00 MiB
llama_new_context_with_model: KV self size = 640.00 MiB, K (f16): 320.00 MiB, V (f16): 320.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.52 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 400.02 MiB on device 1: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 419446784
llama_new_context_with_model: failed to allocate compute buffers
llama_init_from_gpt_params: error: failed to create context with model '/home/cashlo/.ollama/models/blobs/sha256-fa8b357d6f63d2f2415ff3c3c903cde370728ccd8a86a5ee2f18117815fe2f72'
{"function":"load_model","level":"ERR","line":410,"model":"/home/cashlo/.ollama/models/blobs/sha256-fa8b357d6f63d2f2415ff3c3c903cde370728ccd8a86a5ee2f18117815fe2f72","msg":"unable to load model","tid":"139979899854848","timestamp":1715253376}
time=2024-05-09T20:23:34.811+09:00 level=ERROR source=sched.go:332 msg="error loading llama server" error="timed out waiting for llama runner to start: error:failed to create context with model '/home/cashlo/.ollama/models/blobs/sha256-fa8b357d6f63d2f2415ff3c3c903cde370728ccd8a86a5ee2f18117815fe2f72'"
```
The model works when I unplug the 1070, or if I use a Modelfile to set num_gpu to 80.
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.1.34
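The `num_gpu` workaround mentioned above can be sketched as a Modelfile; the tag and layer count mirror this report and are assumptions for other setups:

```shell
# Pin the number of offloaded layers so the scheduler does not
# overcommit the smaller GPU's compute buffers.
cat > Modelfile <<'EOF'
FROM llama3:70b-instruct-q3_K_S
PARAMETER num_gpu 80
EOF
# then: ollama create llama3-70b-pinned -f Modelfile && ollama run llama3-70b-pinned
```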
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4284/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6393
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6393/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6393/comments
|
https://api.github.com/repos/ollama/ollama/issues/6393/events
|
https://github.com/ollama/ollama/pull/6393
| 2,470,791,528
|
PR_kwDOJ0Z1Ps54m1pV
| 6,393
|
Paligemma Support
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-08-16T18:53:21
| 2025-01-04T08:04:37
| 2024-11-21T18:40:03
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6393",
"html_url": "https://github.com/ollama/ollama/pull/6393",
"diff_url": "https://github.com/ollama/ollama/pull/6393.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6393.patch",
"merged_at": null
}
|
This PR is for implementing [Paligemma](https://huggingface.co/collections/google/paligemma-release-6643a9ffbf57de2ae0448dda) support within Ollama using GGML. Paligemma is a one-shot image-text to text model from Google.
The main features of this PR are the pre-processing of image-text prompts to Paligemma, incorporating non-causal attention on prompt processing, and merging image features to input features within llama.cpp for combined decoding. This modifies /examples/llava to adapt to the missing projector in Paligemma. No changes were made to clip.cpp for image embeddings.
- `server.cpp`
`server.cpp` now checks for images and determines which prompt-processing path to take depending on the model architecture. Paligemma uses image placeholder tokens `<image>` as substitutes for the image embeddings prepended to the text prompt. The input to the model should be in the format `<image>...<image><bos> + text + \n`. The text with the placeholders is then embedded. We include the image embeddings previously retrieved from the image encoder as part of the model context; these are then swapped in for the placeholder embeddings while building the compute graph.
- `llama.cpp`
The patch to `llama.cpp` utilizes the text inputs (including the placeholders) to embed the entire input. The image placeholder tokens then have their data exchanged with the correct image embeddings passed in within the model context. Checks are now also made to ensure that logits are reserved within non-causal decodes if the decode includes images as well.
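The placeholder layout described above can be sketched as a small prompt builder (the token strings and placeholder count here are illustrative assumptions; the real count depends on the model variant):

```python
def build_paligemma_prompt(text: str, n_image_tokens: int,
                           image_token: str = "<image>", bos: str = "<bos>") -> str:
    """Lay out the input as <image>...<image><bos> + text + \\n."""
    return image_token * n_image_tokens + bos + text + "\n"

# Example: build_paligemma_prompt("describe the image", 3)
```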
### USAGE:
To run Paligemma, you need to first create GGUF files for both the vision encoder and language model. This can be done using this surgery file [here](https://gist.github.com/joshyan1/d5eb3e58fd51680fcba9b1d87f8b3ebf). Create a `Modelfile` referencing both GGUF files without a template to run.
Pull this model directly from [ollama](https://ollama.com/) using `ollama pull jyan1/paligemma-mix-224`. Take a look at the model [here](https://ollama.com/jyan1/paligemma-mix-224)
#### Build and run this PR
If you do not have a clone of this repository already
```
git clone https://github.com/ollama/ollama.git
```
Build and serve
```shell
cd ollama
git fetch -a
git checkout paligemma-support
go generate ./...
go build .
./ollama serve
```
You can now query Paligemma either from the CLI or via HTTP request.
#### CLI Example
Using another terminal window
```
./ollama run jyan1/paligemma-mix-224
```
Input
```
>>> What is in this image? /path/to/my/puppy.jpg
```
Output
```
Added image '/path/to/my/puppy.jpg'
A brown dog wearing a floral shirt and lei stands proudly next to a clear blue
pool. The dog's mouth is open, its paw rests on the edge of the water, and its
eyes are focused on the horizon. The pool water is crystal clear, and the palm
trees in the distance provide shade for the dog. A black leash connects the dog
to its owner, and a flower lei is around the dog's neck. The dog's fur is brown,
and its nose is black. The tree behind the pool is tall and slender, and the
fence surrounding the pool is made of metal posts.
```
<img src="https://github.com/user-attachments/assets/2f8e4cc5-ad5c-4f29-b74e-5d34fb8d8a98" width="360" height="480">
/path/to/my/puppy.jpg for reference :)
This PR includes support for a one-shot one-image image-text prompt to Ollama. The supported models include Paligemma models with the `-224` suffix, referencing the rescaled image size that they use.
### TODO:
- [ ] Include multi-image prompt processing, which involves appending additional image tokens to the start of the text input, as well as including an array of image embeddings to the model context
- [ ] Utilizing model configurations to append the right number of placeholder tokens in `server.cpp` and set-up correct dimensions
- [ ] Ensure that the prompt is not truncated due to batch size, as Paligemma expects the entire input, including all text and images, to be provided in a single batch. Note that for `Paligemma-...-448` and `Paligemma-...-896` models there are 1024 and 4096 image tokens per image in the input, which is already larger than the `batch_size` set in llama.cpp
Thanks everyone,
Josh and Roy
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6393/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/445
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/445/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/445/comments
|
https://api.github.com/repos/ollama/ollama/issues/445/events
|
https://github.com/ollama/ollama/issues/445
| 1,874,743,075
|
I_kwDOJ0Z1Ps5vvk8j
| 445
|
Option for max number of tokens to generate
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2023-08-31T03:56:31
| 2023-08-31T14:01:20
| 2023-08-31T14:01:20
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/445/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4480
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4480/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4480/comments
|
https://api.github.com/repos/ollama/ollama/issues/4480/events
|
https://github.com/ollama/ollama/issues/4480
| 2,301,379,876
|
I_kwDOJ0Z1Ps6JLEUk
| 4,480
|
Ollama tries to re-create existing models path
|
{
"login": "LumiWasTaken",
"id": 49376128,
"node_id": "MDQ6VXNlcjQ5Mzc2MTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/49376128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LumiWasTaken",
"html_url": "https://github.com/LumiWasTaken",
"followers_url": "https://api.github.com/users/LumiWasTaken/followers",
"following_url": "https://api.github.com/users/LumiWasTaken/following{/other_user}",
"gists_url": "https://api.github.com/users/LumiWasTaken/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LumiWasTaken/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LumiWasTaken/subscriptions",
"organizations_url": "https://api.github.com/users/LumiWasTaken/orgs",
"repos_url": "https://api.github.com/users/LumiWasTaken/repos",
"events_url": "https://api.github.com/users/LumiWasTaken/events{/privacy}",
"received_events_url": "https://api.github.com/users/LumiWasTaken/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-16T21:28:23
| 2024-11-06T14:47:56
| 2024-11-06T14:47:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When starting the ollama service, I have the environment variable OLLAMA_MODELS=/mnt/storage/fast/ollama_models set.
Ollama fails to start: it attempts to mkdir /mnt/storage/fast, which fails (obviously), and it crashes.
I assume there is some kind of permission issue, as the drive is an NTFS drive (it's removable media).
I'm on the latest version, installed with the Linux installer script on EndeavourOS.
The ollama user has full access to all drives and folders, as they have rwx rw rw permissions.
Thanks!
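For reference, here is how the variable is set on my end, a systemd drop-in override (the override file path is the usual drop-in location, but treat it as illustrative):

```shell
# Contents of /etc/systemd/system/ollama.service.d/override.conf:
# [Service]
# Environment="OLLAMA_MODELS=/mnt/storage/fast/ollama_models"

# Then reload systemd and restart the service:
sudo systemctl daemon-reload
sudo systemctl restart ollama
```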
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.38
|
{
"login": "LumiWasTaken",
"id": 49376128,
"node_id": "MDQ6VXNlcjQ5Mzc2MTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/49376128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LumiWasTaken",
"html_url": "https://github.com/LumiWasTaken",
"followers_url": "https://api.github.com/users/LumiWasTaken/followers",
"following_url": "https://api.github.com/users/LumiWasTaken/following{/other_user}",
"gists_url": "https://api.github.com/users/LumiWasTaken/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LumiWasTaken/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LumiWasTaken/subscriptions",
"organizations_url": "https://api.github.com/users/LumiWasTaken/orgs",
"repos_url": "https://api.github.com/users/LumiWasTaken/repos",
"events_url": "https://api.github.com/users/LumiWasTaken/events{/privacy}",
"received_events_url": "https://api.github.com/users/LumiWasTaken/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4480/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5400
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5400/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5400/comments
|
https://api.github.com/repos/ollama/ollama/issues/5400/events
|
https://github.com/ollama/ollama/issues/5400
| 2,383,032,808
|
I_kwDOJ0Z1Ps6OCjHo
| 5,400
|
Gemma2 work incorrect in parallel request
|
{
"login": "dudukk1312",
"id": 33556384,
"node_id": "MDQ6VXNlcjMzNTU2Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/33556384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dudukk1312",
"html_url": "https://github.com/dudukk1312",
"followers_url": "https://api.github.com/users/dudukk1312/followers",
"following_url": "https://api.github.com/users/dudukk1312/following{/other_user}",
"gists_url": "https://api.github.com/users/dudukk1312/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dudukk1312/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dudukk1312/subscriptions",
"organizations_url": "https://api.github.com/users/dudukk1312/orgs",
"repos_url": "https://api.github.com/users/dudukk1312/repos",
"events_url": "https://api.github.com/users/dudukk1312/events{/privacy}",
"received_events_url": "https://api.github.com/users/dudukk1312/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-07-01T07:10:29
| 2024-08-09T23:24:54
| 2024-08-09T23:24:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm testing how many simultaneous requests my system can handle.
For Gemma2, when executing a single request, the response is perfect.
But for multiple parallel requests, the responses come out as nonsense.

I tried multiple parallel requests with llama3, and it worked perfectly.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
v0.1.48
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5400/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8422
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8422/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8422/comments
|
https://api.github.com/repos/ollama/ollama/issues/8422/events
|
https://github.com/ollama/ollama/issues/8422
| 2,787,231,675
|
I_kwDOJ0Z1Ps6mIce7
| 8,422
|
Support for llamaindex/vdr-2b-multi-v1: Multilingual Visual Document Retrieval Model
|
{
"login": "JPC612",
"id": 177754485,
"node_id": "U_kgDOCphRdQ",
"avatar_url": "https://avatars.githubusercontent.com/u/177754485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JPC612",
"html_url": "https://github.com/JPC612",
"followers_url": "https://api.github.com/users/JPC612/followers",
"following_url": "https://api.github.com/users/JPC612/following{/other_user}",
"gists_url": "https://api.github.com/users/JPC612/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JPC612/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JPC612/subscriptions",
"organizations_url": "https://api.github.com/users/JPC612/orgs",
"repos_url": "https://api.github.com/users/JPC612/repos",
"events_url": "https://api.github.com/users/JPC612/events{/privacy}",
"received_events_url": "https://api.github.com/users/JPC612/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 0
| 2025-01-14T14:07:34
| 2025-01-14T14:07:34
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
vdr-2b-multi-v1 is a cutting-edge multilingual embedding model designed for visual document retrieval across various languages and domains. The model encodes document page screenshots into dense single-vector representations, allowing efficient search and querying of visually rich multilingual documents without OCR or data extraction pipelines.
https://huggingface.co/llamaindex/vdr-2b-multi-v1
https://huggingface.co/blog/vdr-2b-multilingual
Highlights:
- Multilingual Training: Trained on Italian, Spanish, English, French, and German, forming a dataset of 500k high-quality samples.
- Low VRAM and Faster Inference: 3x faster inference with only 30% of the image tokens used by its base model.
- Cross-Lingual Retrieval: Search German documents using Italian queries with superior accuracy.
- Matryoshka Representation Learning (MRL): Enables dimensional reduction while maintaining embedding quality, optimizing both retrieval speed and storage.
Why Include This Model?
- Multilingual Applications: Especially beneficial for regions like Europe, where multilingual documents are prevalent.
- Performance and Efficiency: Outperforms previous benchmarks in terms of speed, memory efficiency, and retrieval accuracy.
- Open Source Contributions: Accompanied by the largest open-source multilingual dataset for visual document retrieval (vdr-multilingual-train).
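The MRL point above can be illustrated with a toy sketch (plain NumPy, not the model's actual API): a Matryoshka-trained embedding can be truncated to its first k dimensions and re-normalized, giving a smaller vector that still supports cosine-similarity retrieval.

```python
import numpy as np

def truncate_embedding(v: np.ndarray, k: int) -> np.ndarray:
    # MRL orders the most informative dimensions first, so we can
    # keep only the first k dims and re-normalize to unit length.
    t = v[:k]
    return t / np.linalg.norm(t)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Both inputs are unit-normalized, so the dot product is the cosine similarity.
    return float(a @ b)

# Toy example: a 1536-d "embedding" reduced to 256 dims for faster retrieval.
rng = np.random.default_rng(0)
query = truncate_embedding(rng.standard_normal(1536), 256)
doc = truncate_embedding(rng.standard_normal(1536), 256)
score = cosine(query, doc)
```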
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8422/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1553
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1553/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1553/comments
|
https://api.github.com/repos/ollama/ollama/issues/1553/events
|
https://github.com/ollama/ollama/issues/1553
| 2,044,324,202
|
I_kwDOJ0Z1Ps552elq
| 1,553
|
customise number of experts in mixtral
|
{
"login": "scienlabs",
"id": 129055509,
"node_id": "U_kgDOB7E7FQ",
"avatar_url": "https://avatars.githubusercontent.com/u/129055509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scienlabs",
"html_url": "https://github.com/scienlabs",
"followers_url": "https://api.github.com/users/scienlabs/followers",
"following_url": "https://api.github.com/users/scienlabs/following{/other_user}",
"gists_url": "https://api.github.com/users/scienlabs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scienlabs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scienlabs/subscriptions",
"organizations_url": "https://api.github.com/users/scienlabs/orgs",
"repos_url": "https://api.github.com/users/scienlabs/repos",
"events_url": "https://api.github.com/users/scienlabs/events{/privacy}",
"received_events_url": "https://api.github.com/users/scienlabs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 4
| 2023-12-15T20:21:19
| 2024-08-02T03:53:19
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Could someone provide guidance or documentation on how to adjust the number of experts in mixtral? I'm particularly interested in whether there's a way to dynamically adjust this number based on the requirements of different tasks or scenarios.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1553/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8672
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8672/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8672/comments
|
https://api.github.com/repos/ollama/ollama/issues/8672/events
|
https://github.com/ollama/ollama/pull/8672
| 2,819,339,493
|
PR_kwDOJ0Z1Ps6JbBf1
| 8,672
|
openai: set num_ctx through extra body
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-01-29T21:14:09
| 2025-01-29T21:22:12
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8672",
"html_url": "https://github.com/ollama/ollama/pull/8672",
"diff_url": "https://github.com/ollama/ollama/pull/8672.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8672.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8672/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1953
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1953/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1953/comments
|
https://api.github.com/repos/ollama/ollama/issues/1953/events
|
https://github.com/ollama/ollama/issues/1953
| 2,078,997,078
|
I_kwDOJ0Z1Ps576vpW
| 1,953
|
config for the server to change the location of the models
|
{
"login": "aemonge",
"id": 1322348,
"node_id": "MDQ6VXNlcjEzMjIzNDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1322348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aemonge",
"html_url": "https://github.com/aemonge",
"followers_url": "https://api.github.com/users/aemonge/followers",
"following_url": "https://api.github.com/users/aemonge/following{/other_user}",
"gists_url": "https://api.github.com/users/aemonge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aemonge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aemonge/subscriptions",
"organizations_url": "https://api.github.com/users/aemonge/orgs",
"repos_url": "https://api.github.com/users/aemonge/repos",
"events_url": "https://api.github.com/users/aemonge/events{/privacy}",
"received_events_url": "https://api.github.com/users/aemonge/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2024-01-12T14:41:12
| 2024-02-01T14:59:47
| 2024-02-01T14:59:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Can we have a /etc/ollama.json file to change the default path for the models?
|
{
"login": "aemonge",
"id": 1322348,
"node_id": "MDQ6VXNlcjEzMjIzNDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1322348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aemonge",
"html_url": "https://github.com/aemonge",
"followers_url": "https://api.github.com/users/aemonge/followers",
"following_url": "https://api.github.com/users/aemonge/following{/other_user}",
"gists_url": "https://api.github.com/users/aemonge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aemonge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aemonge/subscriptions",
"organizations_url": "https://api.github.com/users/aemonge/orgs",
"repos_url": "https://api.github.com/users/aemonge/repos",
"events_url": "https://api.github.com/users/aemonge/events{/privacy}",
"received_events_url": "https://api.github.com/users/aemonge/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1953/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1548
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1548/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1548/comments
|
https://api.github.com/repos/ollama/ollama/issues/1548/events
|
https://github.com/ollama/ollama/issues/1548
| 2,044,105,727
|
I_kwDOJ0Z1Ps551pP_
| 1,548
|
When is the Windows Version of Ollama Coming out?
|
{
"login": "Arnav3241",
"id": 105989942,
"node_id": "U_kgDOBlFHNg",
"avatar_url": "https://avatars.githubusercontent.com/u/105989942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arnav3241",
"html_url": "https://github.com/Arnav3241",
"followers_url": "https://api.github.com/users/Arnav3241/followers",
"following_url": "https://api.github.com/users/Arnav3241/following{/other_user}",
"gists_url": "https://api.github.com/users/Arnav3241/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arnav3241/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arnav3241/subscriptions",
"organizations_url": "https://api.github.com/users/Arnav3241/orgs",
"repos_url": "https://api.github.com/users/Arnav3241/repos",
"events_url": "https://api.github.com/users/Arnav3241/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arnav3241/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2023-12-15T17:32:58
| 2024-03-02T14:27:51
| 2023-12-19T19:30:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey there,
When is the Windows Version of Ollama Coming out?
I and several hundred other people have been waiting for it eagerly, some since the beginning of this project. I hope to see that update soon, as most people use Windows.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1548/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5835
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5835/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5835/comments
|
https://api.github.com/repos/ollama/ollama/issues/5835/events
|
https://github.com/ollama/ollama/issues/5835
| 2,421,642,075
|
I_kwDOJ0Z1Ps6QV1Nb
| 5,835
|
orian-ollama-webui
|
{
"login": "werruww",
"id": 157249411,
"node_id": "U_kgDOCV9vgw",
"avatar_url": "https://avatars.githubusercontent.com/u/157249411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/werruww",
"html_url": "https://github.com/werruww",
"followers_url": "https://api.github.com/users/werruww/followers",
"following_url": "https://api.github.com/users/werruww/following{/other_user}",
"gists_url": "https://api.github.com/users/werruww/gists{/gist_id}",
"starred_url": "https://api.github.com/users/werruww/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/werruww/subscriptions",
"organizations_url": "https://api.github.com/users/werruww/orgs",
"repos_url": "https://api.github.com/users/werruww/repos",
"events_url": "https://api.github.com/users/werruww/events{/privacy}",
"received_events_url": "https://api.github.com/users/werruww/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-21T23:11:13
| 2024-09-04T04:08:32
| 2024-09-04T04:08:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Failed to post request http://Localhost:11434
edge
win10
The extension works on http://localhost:11434 but gives an error
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5835/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4794
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4794/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4794/comments
|
https://api.github.com/repos/ollama/ollama/issues/4794/events
|
https://github.com/ollama/ollama/issues/4794
| 2,330,482,605
|
I_kwDOJ0Z1Ps6K6Fet
| 4,794
|
Error: llama runner process has terminated: exit status 0xc000001d
|
{
"login": "Ecthellin203",
"id": 94040890,
"node_id": "U_kgDOBZrzOg",
"avatar_url": "https://avatars.githubusercontent.com/u/94040890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ecthellin203",
"html_url": "https://github.com/Ecthellin203",
"followers_url": "https://api.github.com/users/Ecthellin203/followers",
"following_url": "https://api.github.com/users/Ecthellin203/following{/other_user}",
"gists_url": "https://api.github.com/users/Ecthellin203/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ecthellin203/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ecthellin203/subscriptions",
"organizations_url": "https://api.github.com/users/Ecthellin203/orgs",
"repos_url": "https://api.github.com/users/Ecthellin203/repos",
"events_url": "https://api.github.com/users/Ecthellin203/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ecthellin203/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-06-03T08:17:11
| 2024-06-03T08:17:29
| 2024-06-03T08:17:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
run llama3:latest
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "Ecthellin203",
"id": 94040890,
"node_id": "U_kgDOBZrzOg",
"avatar_url": "https://avatars.githubusercontent.com/u/94040890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ecthellin203",
"html_url": "https://github.com/Ecthellin203",
"followers_url": "https://api.github.com/users/Ecthellin203/followers",
"following_url": "https://api.github.com/users/Ecthellin203/following{/other_user}",
"gists_url": "https://api.github.com/users/Ecthellin203/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ecthellin203/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ecthellin203/subscriptions",
"organizations_url": "https://api.github.com/users/Ecthellin203/orgs",
"repos_url": "https://api.github.com/users/Ecthellin203/repos",
"events_url": "https://api.github.com/users/Ecthellin203/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ecthellin203/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4794/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/8269
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8269/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8269/comments
|
https://api.github.com/repos/ollama/ollama/issues/8269/events
|
https://github.com/ollama/ollama/issues/8269
| 2,763,208,278
|
I_kwDOJ0Z1Ps6kszZW
| 8,269
|
Community Contribution: Open-Source Chinese Tutorial for Ollama
|
{
"login": "AXYZdong",
"id": 45477220,
"node_id": "MDQ6VXNlcjQ1NDc3MjIw",
"avatar_url": "https://avatars.githubusercontent.com/u/45477220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AXYZdong",
"html_url": "https://github.com/AXYZdong",
"followers_url": "https://api.github.com/users/AXYZdong/followers",
"following_url": "https://api.github.com/users/AXYZdong/following{/other_user}",
"gists_url": "https://api.github.com/users/AXYZdong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AXYZdong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AXYZdong/subscriptions",
"organizations_url": "https://api.github.com/users/AXYZdong/orgs",
"repos_url": "https://api.github.com/users/AXYZdong/repos",
"events_url": "https://api.github.com/users/AXYZdong/events{/privacy}",
"received_events_url": "https://api.github.com/users/AXYZdong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-12-30T13:24:23
| 2024-12-30T13:35:09
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello, we'd like to contribute to the Ollama community by announcing the release of our open-source Chinese tutorial.
This tutorial aims to be comprehensive and easy to understand, covering:
- Ollama Introduction
- Ollama Installation and Configuration
- Custom Model Import
- Ollama REST API
- Using Ollama with LangChain
- Deployment of Ollama Visual Interfaces
- Application Examples
The repo is at: https://github.com/datawhalechina/handy-ollama
The tutorial is available at: https://datawhalechina.github.io/handy-ollama/
We would be happy to discuss the possibility of linking to this tutorial from the official documentation or resources to make it more accessible to Chinese users. Thank you for your consideration.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8269/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/6424
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6424/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6424/comments
|
https://api.github.com/repos/ollama/ollama/issues/6424/events
|
https://github.com/ollama/ollama/pull/6424
| 2,474,013,357
|
PR_kwDOJ0Z1Ps54xXBi
| 6,424
|
Fix overlapping artifact name on CI
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-19T19:08:23
| 2024-08-19T19:12:02
| 2024-08-19T19:11:58
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6424",
"html_url": "https://github.com/ollama/ollama/pull/6424",
"diff_url": "https://github.com/ollama/ollama/pull/6424.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6424.patch",
"merged_at": "2024-08-19T19:11:58"
}
| null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6424/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4722
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4722/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4722/comments
|
https://api.github.com/repos/ollama/ollama/issues/4722/events
|
https://github.com/ollama/ollama/issues/4722
| 2,325,607,927
|
I_kwDOJ0Z1Ps6KnfX3
| 4,722
|
Slower performance on Arm64 with Phi3 and Lexi-Llama on 1.39
|
{
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 1
| 2024-05-30T12:51:26
| 2024-05-31T18:18:50
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am getting 5x slower speeds with the new ollama-linux-arm64 binary in version 1.39 compared to the 1.38 release and earlier.
I am not sure what is going on. I am using an Orange Pi 5 Plus.
As I said, 1.38 works flawlessly, even on my Raspberry Pi 5.
### OS
Linux
### GPU
Other
### CPU
Other
### Ollama version
1.39
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4722/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1819
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1819/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1819/comments
|
https://api.github.com/repos/ollama/ollama/issues/1819/events
|
https://github.com/ollama/ollama/pull/1819
| 2,068,346,258
|
PR_kwDOJ0Z1Ps5jXbwf
| 1,819
|
Support multiple LLM libs; ROCm v5 and v6; Rosetta, AVX, and AVX2 compatible CPU builds
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-06T03:52:13
| 2024-01-11T22:00:51
| 2024-01-11T22:00:48
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1819",
"html_url": "https://github.com/ollama/ollama/pull/1819",
"diff_url": "https://github.com/ollama/ollama/pull/1819.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1819.patch",
"merged_at": "2024-01-11T22:00:48"
}
|
In some cases we may want multiple variants for a given GPU type or CPU. This adds logic to have an optional Variant which we can use to select an optimal library, but also allows us to try multiple variants in case some fail to load.
This change includes updates to the Dockerfile.build to compile 2 variants for ROCm so we can support v5 and v6.
I've also added multiple CPU variants and runtime detection logic so we can support both a lowest-common-denominator build for really old CPUs (and Rosetta emulation on macOS) as well as more modern CPUs. At present, llama.cpp does not verify CPU features, so loading the wrong CPU variant will panic the whole process with an illegal instruction. Ollama should autodetect the optimal LLM library variant for the given system, but I've also added a fail-safe mechanism so users can force a specific library to work around problems should they arise.
This also converges the LLM library model on dynamic loading for all scenarios instead of having a built-in static link for macOS and Linux. Windows was always fully dynamic, and now Linux and macOS follow the same pattern, so I was able to clean up the implementation and reduce some unnecessary complexity.
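The variant-selection idea described above can be sketched in a few lines. This is not Ollama's actual Go implementation; the variant names and feature sets below are illustrative, and the sketch only shows the selection logic: prefer the most capable library whose required CPU features are all present, falling back to a lowest-common-denominator build.

```python
def pick_variant(cpu_flags, variants):
    """Return the first (most preferred) variant whose required CPU
    features are all present; fall back to the last entry."""
    for name, required in variants:
        if required.issubset(cpu_flags):
            return name
    return variants[-1][0]

# Variants ordered from most to least demanding; the plain "cpu"
# build is the lowest-common-denominator fallback.
variants = [
    ("cpu_avx2", {"avx", "avx2"}),
    ("cpu_avx", {"avx"}),
    ("cpu", set()),
]

print(pick_variant({"sse4_2", "avx", "avx2"}, variants))  # cpu_avx2
print(pick_variant({"sse4_2"}, variants))                 # cpu
```

Checking features before loading is what prevents the illegal-instruction panic: an incompatible variant is never dlopen'd in the first place.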
Fixes #1868
Fixes #1821
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1819/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2934
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2934/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2934/comments
|
https://api.github.com/repos/ollama/ollama/issues/2934/events
|
https://github.com/ollama/ollama/issues/2934
| 2,169,008,855
|
I_kwDOJ0Z1Ps6BSHLX
| 2,934
|
Unable to init GPU: unknown error
|
{
"login": "PLNech",
"id": 1821404,
"node_id": "MDQ6VXNlcjE4MjE0MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1821404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PLNech",
"html_url": "https://github.com/PLNech",
"followers_url": "https://api.github.com/users/PLNech/followers",
"following_url": "https://api.github.com/users/PLNech/following{/other_user}",
"gists_url": "https://api.github.com/users/PLNech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PLNech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PLNech/subscriptions",
"organizations_url": "https://api.github.com/users/PLNech/orgs",
"repos_url": "https://api.github.com/users/PLNech/repos",
"events_url": "https://api.github.com/users/PLNech/events{/privacy}",
"received_events_url": "https://api.github.com/users/PLNech/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 11
| 2024-03-05T11:54:16
| 2024-08-08T02:34:33
| 2024-03-24T18:15:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi there! My Ollama-based project (thanks for the amazing framework <3) suddenly stopped using the GPU as its backend.
It used to work well, and a few days ago I could confirm from the logs that the `GPU layers` offloading was happening.
Today the specific error I see in the journal is: `Failed to load dynamic library /tmp/ollama3406780784/cuda_v11/libext_server.so`
Here's my journalctl relevant output. At first GPU is well detected, CUDA too:
```
Mar 02 18:35:37 XPS24 systemd[1]: Started Ollama Service.
Mar 02 18:35:37 XPS24 ollama[135152]: time=2024-03-02T18:35:37.066+01:00 level=INFO source=images.go:710 msg="total blobs: 63"
Mar 02 18:35:37 XPS24 ollama[135152]: time=2024-03-02T18:35:37.068+01:00 level=INFO source=images.go:717 msg="total unused blobs removed: 0"
Mar 02 18:35:37 XPS24 ollama[135152]: time=2024-03-02T18:35:37.068+01:00 level=INFO source=routes.go:1019 msg="Listening on 127.0.0.1:11434 (version 0.1.27)"
Mar 02 18:35:37 XPS24 ollama[135152]: time=2024-03-02T18:35:37.068+01:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
Mar 02 18:35:40 XPS24 ollama[135152]: time=2024-03-02T18:35:40.311+01:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [rocm_v5 rocm_v6 cpu_avx cuda_v11 cpu cpu_avx2]"
Mar 02 18:35:40 XPS24 ollama[135152]: time=2024-03-02T18:35:40.311+01:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
Mar 02 18:35:40 XPS24 ollama[135152]: time=2024-03-02T18:35:40.311+01:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
Mar 02 18:35:40 XPS24 ollama[135152]: time=2024-03-02T18:35:40.314+01:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.545.23.08]"
Mar 02 18:35:40 XPS24 ollama[135152]: time=2024-03-02T18:35:40.324+01:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
Mar 02 18:35:40 XPS24 ollama[135152]: time=2024-03-02T18:35:40.324+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Mar 02 18:35:40 XPS24 ollama[135152]: time=2024-03-02T18:35:40.330+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
```
but then the GPU library .so fails to load without a descriptive reason:
```
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama3406780784/cuda_v11/libext_server.so"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=WARN source=llm.go:162 msg="Failed to load dynamic library /tmp/ollama3406780784/cuda_v11/libext_server.so Unable to init GPU: unknown error"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama3406780784/cpu_avx2/libext_server.so"
```
Any idea what could cause this issue? NVIDIA and CUDA seem fine on my machine; see the `nvidia-smi` output:
```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08 Driver Version: 545.23.08 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 4080 ... On | 00000000:01:00.0 Off | N/A |
| N/A 50C P4 18W / 60W | 10MiB / 12282MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 4130 G /usr/lib/xorg/Xorg 4MiB |
+---------------------------------------------------------------------------------------+
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2934/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2934/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4317
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4317/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4317/comments
|
https://api.github.com/repos/ollama/ollama/issues/4317/events
|
https://github.com/ollama/ollama/pull/4317
| 2,290,049,993
|
PR_kwDOJ0Z1Ps5vHajY
| 4,317
|
Don't clamp ctx size in `PredictServerFit`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-10T16:32:57
| 2024-05-10T17:17:12
| 2024-05-10T17:17:12
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4317",
"html_url": "https://github.com/ollama/ollama/pull/4317",
"diff_url": "https://github.com/ollama/ollama/pull/4317.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4317.patch",
"merged_at": "2024-05-10T17:17:12"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4317/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4030
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4030/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4030/comments
|
https://api.github.com/repos/ollama/ollama/issues/4030/events
|
https://github.com/ollama/ollama/issues/4030
| 2,269,360,438
|
I_kwDOJ0Z1Ps6HQ7E2
| 4,030
|
Integrate json_repair package to overcome the slow json feature
|
{
"login": "ArslanKAS",
"id": 43797457,
"node_id": "MDQ6VXNlcjQzNzk3NDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/43797457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArslanKAS",
"html_url": "https://github.com/ArslanKAS",
"followers_url": "https://api.github.com/users/ArslanKAS/followers",
"following_url": "https://api.github.com/users/ArslanKAS/following{/other_user}",
"gists_url": "https://api.github.com/users/ArslanKAS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArslanKAS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArslanKAS/subscriptions",
"organizations_url": "https://api.github.com/users/ArslanKAS/orgs",
"repos_url": "https://api.github.com/users/ArslanKAS/repos",
"events_url": "https://api.github.com/users/ArslanKAS/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArslanKAS/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-29T15:36:54
| 2024-05-01T22:37:35
| 2024-05-01T22:37:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi there,
Ollama is a great tool for getting started with LLMs, but the issue I'm facing is JSON processing. The JSON format enforcer option in Ollama just takes too long: it turns an 8-second response from Llama 3 into a 90-second response. I don't know why it takes so long. I've come across a great package called "json_repair" that fixes broken JSON responses.
If the Ollama team could integrate it, a broken JSON response from an LLM could be fixed within a second or two. That would be a great help for function calling too.
Here's the repo:
https://github.com/mangiucugna/json_repair
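As a toy illustration of the kind of fix such a repair library performs (this is not the json_repair API, just a stdlib sketch of one common repair, removing trailing commas):

```python
import json
import re

def strip_trailing_commas(text):
    # Drop commas that sit directly before a closing brace or bracket,
    # one of the most common ways LLM output breaks strict JSON.
    return re.sub(r",\s*([}\]])", r"\1", text)

broken = '{"model": "llama3", "tags": ["chat", "code",],}'
print(json.loads(strip_trailing_commas(broken)))
# {'model': 'llama3', 'tags': ['chat', 'code']}
```

A real repair library has to do much more than this, e.g. avoid touching commas inside string literals and close unbalanced brackets, which is why a dedicated package is attractive here.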
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4030/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/668
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/668/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/668/comments
|
https://api.github.com/repos/ollama/ollama/issues/668/events
|
https://github.com/ollama/ollama/issues/668
| 1,920,910,935
|
I_kwDOJ0Z1Ps5yfsZX
| 668
|
Client only displays `Unexpected EOF` when error happens during `/generate`
|
{
"login": "ratnadeep007",
"id": 17860233,
"node_id": "MDQ6VXNlcjE3ODYwMjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/17860233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ratnadeep007",
"html_url": "https://github.com/ratnadeep007",
"followers_url": "https://api.github.com/users/ratnadeep007/followers",
"following_url": "https://api.github.com/users/ratnadeep007/following{/other_user}",
"gists_url": "https://api.github.com/users/ratnadeep007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ratnadeep007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratnadeep007/subscriptions",
"organizations_url": "https://api.github.com/users/ratnadeep007/orgs",
"repos_url": "https://api.github.com/users/ratnadeep007/repos",
"events_url": "https://api.github.com/users/ratnadeep007/events{/privacy}",
"received_events_url": "https://api.github.com/users/ratnadeep007/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 12
| 2023-10-01T18:42:59
| 2024-03-05T19:16:47
| 2023-10-27T20:55:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Issue:
codellama 13b runs, while codellama 7b fails with the following error:
`Error: error reading llm response: unexpected EOF`
I can run codellama 13b with the same prompt.
I have 16GB RAM.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/668/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/668/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3489
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3489/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3489/comments
|
https://api.github.com/repos/ollama/ollama/issues/3489/events
|
https://github.com/ollama/ollama/issues/3489
| 2,225,537,142
|
I_kwDOJ0Z1Ps6EpwB2
| 3,489
|
Ollama goes into uninterruptible sleep mode and cannot be shutdown
|
{
"login": "sumitkm",
"id": 578882,
"node_id": "MDQ6VXNlcjU3ODg4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/578882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumitkm",
"html_url": "https://github.com/sumitkm",
"followers_url": "https://api.github.com/users/sumitkm/followers",
"following_url": "https://api.github.com/users/sumitkm/following{/other_user}",
"gists_url": "https://api.github.com/users/sumitkm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumitkm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumitkm/subscriptions",
"organizations_url": "https://api.github.com/users/sumitkm/orgs",
"repos_url": "https://api.github.com/users/sumitkm/repos",
"events_url": "https://api.github.com/users/sumitkm/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumitkm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8
| 2024-04-04T13:32:39
| 2024-05-21T18:27:04
| 2024-05-21T18:27:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This is probably same as #2029 but I'll try and provide more details.
As mentioned in the above issue, Ollama works fine from a clean boot. If I put the computer to sleep and then wake it up, Ollama gets into a state where it can't be stopped via `systemctl` or killed via `kill <pid>`. Here is a screenshot from `htop` after I tried to stop it (after waking the computer up from sleep).

Output of `systemctl status ollama.service`
```
sumitkm@PIOT-WORK-02:~$ sudo systemctl status ollama.service
× ollama.service - Ollama Service
Loaded: loaded (/etc/systemd/system/ollama.service; enabled; preset: enabled)
Active: failed (Result: timeout) since Thu 2024-04-04 13:34:55 BST; 4min 30s ago
Duration: 30min 53.337s
Main PID: 1675
Tasks: 1 (limit: 114811)
Memory: 339.5M
CPU: 4.780s
CGroup: /system.slice/ollama.service
└─1675 "[ollama]"
Apr 04 13:30:24 PIOT-WORK-02 systemd[1]: ollama.service: State 'stop-sigterm' timed out. Killing.
Apr 04 13:30:24 PIOT-WORK-02 systemd[1]: ollama.service: Killing process 1675 (ollama) with signal SIGKILL.
Apr 04 13:31:54 PIOT-WORK-02 systemd[1]: ollama.service: Processes still around after SIGKILL. Ignoring.
Apr 04 13:33:24 PIOT-WORK-02 systemd[1]: ollama.service: State 'final-sigterm' timed out. Killing.
Apr 04 13:33:24 PIOT-WORK-02 systemd[1]: ollama.service: Killing process 1675 (ollama) with signal SIGKILL.
Apr 04 13:34:55 PIOT-WORK-02 systemd[1]: ollama.service: Processes still around after final SIGKILL. Entering failed mode.
Apr 04 13:34:55 PIOT-WORK-02 systemd[1]: ollama.service: Failed with result 'timeout'.
Apr 04 13:34:55 PIOT-WORK-02 systemd[1]: ollama.service: Unit process 1675 (ollama) remains running after unit stopped.
Apr 04 13:34:55 PIOT-WORK-02 systemd[1]: Stopped ollama.service - Ollama Service.
Apr 04 13:34:55 PIOT-WORK-02 systemd[1]: ollama.service: Consumed 4.780s CPU time.
```
Even if I don't try to kill it, Ollama prevents the system from going to sleep completely. Reboot attempts to kill it three times and then forces the reboot anyway.
Workaround
==========
If I do a `sudo systemctl stop ollama.service` before I put the computer to sleep, everything works great.
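The manual stop/start around suspend can be automated with a small systemd unit hooked onto `sleep.target`, so the service is stopped before suspend and restarted on resume. This is a sketch of a common pattern, not something shipped with Ollama; the unit name and paths are hypothetical:

```
# Hypothetical /etc/systemd/system/ollama-sleep.service
# Enable with: sudo systemctl enable ollama-sleep.service
[Unit]
Description=Stop Ollama around system sleep
Before=sleep.target
StopWhenUnneeded=yes

[Service]
Type=oneshot
RemainAfterExit=yes
# ExecStart runs before suspend; ExecStop runs when sleep.target is left (resume).
ExecStart=/usr/bin/systemctl stop ollama.service
ExecStop=/usr/bin/systemctl start ollama.service

[Install]
WantedBy=sleep.target
```

`Before=sleep.target` orders the stop ahead of the suspend, and `StopWhenUnneeded=yes` makes systemd tear the unit down (triggering `ExecStop`) once the system has resumed.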
journalctl -u ollama
====================
[logs-ollama.txt](https://github.com/ollama/ollama/files/14871920/logs-ollama.txt)
Hardware
--------
CPU: AMD Ryzen 7950x (AMD Eco mode 105W via bios settings)
RAM: 96 GB DDR 5
GPU: NVIDIA RTX 4090 (MSI Gaming X Slim card)
hostnamectl
-----------
Static hostname: PIOT-WORK-02
Icon name: computer-desktop
Chassis: desktop 🖥️
Machine ID: f3d60fb0a8c84b7aa9eb8e**********
Boot ID: 0b1a6283d6d04921a0f685**********
Operating System: Debian GNU/Linux 12 (bookworm)
Kernel: Linux 6.1.0-18-amd64
Architecture: x86-64
Hardware Vendor: ASUS
Hardware Model: ROG CROSSHAIR X670E HERO
Firmware Version: 1905
### What did you expect to see?
Ollama should work correctly when the system wakes up from sleep, including going into sleep mode a second time and on subsequent occasions.
### Steps to reproduce
Steps mentioned in description are reproducible reliably on my computer.
### Are there any recent changes that introduced the issue?
This is a brand new system, and apart from `ollama` and the Phoronix test suite, there is barely anything else on it.
### OS
Linux
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.30
### GPU
Nvidia
### GPU info
```
nvidia-smi
Thu Apr 4 14:27:29 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.14 Driver Version: 550.54.14 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4090 On | 00000000:01:00.0 On | Off |
| 0% 45C P8 17W / 450W | 1220MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 2042 G /usr/lib/xorg/Xorg 383MiB |
| 0 N/A N/A 2224 G /usr/bin/gnome-shell 123MiB |
| 0 N/A N/A 4123 G ...sumitkm/AppImages/Telegram/Telegram 7MiB |
| 0 N/A N/A 4381 G /usr/lib/firefox/firefox-bin 0MiB |
| 0 N/A N/A 4790 G ...sion,SpareRendererForSitePerProcess 79MiB |
| 0 N/A N/A 5198 G /usr/bin/nautilus 24MiB |
| 0 N/A N/A 5272 G ...erProcess --variations-seed-version 23MiB |
| 0 N/A N/A 5881 C /usr/local/bin/ollama 0MiB |
+-----------------------------------------------------------------------------------------+
```
### CPU
AMD
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3489/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/3489/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3773
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3773/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3773/comments
|
https://api.github.com/repos/ollama/ollama/issues/3773/events
|
https://github.com/ollama/ollama/pull/3773
| 2,254,499,244
|
PR_kwDOJ0Z1Ps5tPwKW
| 3,773
|
Add PHP as an example
|
{
"login": "alnutile",
"id": 365385,
"node_id": "MDQ6VXNlcjM2NTM4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/365385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alnutile",
"html_url": "https://github.com/alnutile",
"followers_url": "https://api.github.com/users/alnutile/followers",
"following_url": "https://api.github.com/users/alnutile/following{/other_user}",
"gists_url": "https://api.github.com/users/alnutile/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alnutile/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alnutile/subscriptions",
"organizations_url": "https://api.github.com/users/alnutile/orgs",
"repos_url": "https://api.github.com/users/alnutile/repos",
"events_url": "https://api.github.com/users/alnutile/events{/privacy}",
"received_events_url": "https://api.github.com/users/alnutile/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-20T10:53:16
| 2024-11-21T17:59:34
| 2024-11-21T17:59:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3773",
"html_url": "https://github.com/ollama/ollama/pull/3773",
"diff_url": "https://github.com/ollama/ollama/pull/3773.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3773.patch",
"merged_at": null
}
|
I could turn this into a markdown file if that is better
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3773/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3168
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3168/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3168/comments
|
https://api.github.com/repos/ollama/ollama/issues/3168/events
|
https://github.com/ollama/ollama/issues/3168
| 2,188,411,676
|
I_kwDOJ0Z1Ps6CcIMc
| 3,168
|
WIndows security: ALERT: Trojan:Script/Wacatac.B!ml detected
|
{
"login": "jabbor",
"id": 50017850,
"node_id": "MDQ6VXNlcjUwMDE3ODUw",
"avatar_url": "https://avatars.githubusercontent.com/u/50017850?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jabbor",
"html_url": "https://github.com/jabbor",
"followers_url": "https://api.github.com/users/jabbor/followers",
"following_url": "https://api.github.com/users/jabbor/following{/other_user}",
"gists_url": "https://api.github.com/users/jabbor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jabbor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jabbor/subscriptions",
"organizations_url": "https://api.github.com/users/jabbor/orgs",
"repos_url": "https://api.github.com/users/jabbor/repos",
"events_url": "https://api.github.com/users/jabbor/events{/privacy}",
"received_events_url": "https://api.github.com/users/jabbor/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 11
| 2024-03-15T12:30:41
| 2024-08-14T03:25:17
| 2024-03-20T15:53:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This program is dangerous and executes commands from an attacker.
File C:\Users\xxxxxxxxx\AppData\Local\Programs\Ollama\ollama app.exe
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3168/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8555
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8555/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8555/comments
|
https://api.github.com/repos/ollama/ollama/issues/8555/events
|
https://github.com/ollama/ollama/issues/8555
| 2,808,363,638
|
I_kwDOJ0Z1Ps6nZDp2
| 8,555
|
Request for Custom Installation Directory in Ollama on Windows to Avoid System Drive Space Issues
|
{
"login": "lanyur",
"id": 721637,
"node_id": "MDQ6VXNlcjcyMTYzNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/721637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lanyur",
"html_url": "https://github.com/lanyur",
"followers_url": "https://api.github.com/users/lanyur/followers",
"following_url": "https://api.github.com/users/lanyur/following{/other_user}",
"gists_url": "https://api.github.com/users/lanyur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lanyur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lanyur/subscriptions",
"organizations_url": "https://api.github.com/users/lanyur/orgs",
"repos_url": "https://api.github.com/users/lanyur/repos",
"events_url": "https://api.github.com/users/lanyur/events{/privacy}",
"received_events_url": "https://api.github.com/users/lanyur/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2025-01-24T02:42:58
| 2025-01-24T09:25:18
| 2025-01-24T09:25:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I hope the Windows installer for Ollama can include an option to specify the installation directory. The system drive on Windows often runs out of space due to the Windows update mechanism. Moreover, without a way to specify the directory during installation, the downloaded large model files are also installed on the system drive by default, which is very inconvenient.
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8555/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2590
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2590/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2590/comments
|
https://api.github.com/repos/ollama/ollama/issues/2590/events
|
https://github.com/ollama/ollama/issues/2590
| 2,142,106,142
|
I_kwDOJ0Z1Ps5_rfIe
| 2,590
|
Read-only file system
|
{
"login": "iaoxuesheng",
"id": 94165844,
"node_id": "U_kgDOBZzbVA",
"avatar_url": "https://avatars.githubusercontent.com/u/94165844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iaoxuesheng",
"html_url": "https://github.com/iaoxuesheng",
"followers_url": "https://api.github.com/users/iaoxuesheng/followers",
"following_url": "https://api.github.com/users/iaoxuesheng/following{/other_user}",
"gists_url": "https://api.github.com/users/iaoxuesheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iaoxuesheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iaoxuesheng/subscriptions",
"organizations_url": "https://api.github.com/users/iaoxuesheng/orgs",
"repos_url": "https://api.github.com/users/iaoxuesheng/repos",
"events_url": "https://api.github.com/users/iaoxuesheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/iaoxuesheng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-19T11:04:07
| 2024-02-20T06:22:19
| 2024-02-20T06:22:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When running `curl -fsSL https://ollama.com/install.sh | sh`,
an error is reported:

|
{
"login": "iaoxuesheng",
"id": 94165844,
"node_id": "U_kgDOBZzbVA",
"avatar_url": "https://avatars.githubusercontent.com/u/94165844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iaoxuesheng",
"html_url": "https://github.com/iaoxuesheng",
"followers_url": "https://api.github.com/users/iaoxuesheng/followers",
"following_url": "https://api.github.com/users/iaoxuesheng/following{/other_user}",
"gists_url": "https://api.github.com/users/iaoxuesheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iaoxuesheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iaoxuesheng/subscriptions",
"organizations_url": "https://api.github.com/users/iaoxuesheng/orgs",
"repos_url": "https://api.github.com/users/iaoxuesheng/repos",
"events_url": "https://api.github.com/users/iaoxuesheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/iaoxuesheng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2590/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7116
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7116/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7116/comments
|
https://api.github.com/repos/ollama/ollama/issues/7116/events
|
https://github.com/ollama/ollama/issues/7116
| 2,570,511,676
|
I_kwDOJ0Z1Ps6ZNuU8
| 7,116
|
Make target audience clearer on website/in repository README
|
{
"login": "victorb",
"id": 459764,
"node_id": "MDQ6VXNlcjQ1OTc2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/459764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/victorb",
"html_url": "https://github.com/victorb",
"followers_url": "https://api.github.com/users/victorb/followers",
"following_url": "https://api.github.com/users/victorb/following{/other_user}",
"gists_url": "https://api.github.com/users/victorb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/victorb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/victorb/subscriptions",
"organizations_url": "https://api.github.com/users/victorb/orgs",
"repos_url": "https://api.github.com/users/victorb/repos",
"events_url": "https://api.github.com/users/victorb/events{/privacy}",
"received_events_url": "https://api.github.com/users/victorb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-10-07T13:51:59
| 2024-10-17T20:34:28
| 2024-10-17T20:34:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently, it seems like Ollama is mostly geared towards developers and adjacent audiences, yet that isn't 100% clear on the website or in the repository README.
As a result, many end users who might be better served by other tooling give Ollama a try regardless, even though it may be overly technical relative to their knowledge, expertise, and expectations.
In many cases these users end up confused and frustrated, as they wade into deep technical waters without the prerequisite understanding.
One potential fix for this is to make it explicitly clear that Ollama is made for developers and alike, on the website and in the repository README.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7116/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7116/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3953
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3953/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3953/comments
|
https://api.github.com/repos/ollama/ollama/issues/3953/events
|
https://github.com/ollama/ollama/issues/3953
| 2,266,389,526
|
I_kwDOJ0Z1Ps6HFlwW
| 3,953
|
Support VLLM as a backend
|
{
"login": "kannon92",
"id": 3780425,
"node_id": "MDQ6VXNlcjM3ODA0MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3780425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kannon92",
"html_url": "https://github.com/kannon92",
"followers_url": "https://api.github.com/users/kannon92/followers",
"following_url": "https://api.github.com/users/kannon92/following{/other_user}",
"gists_url": "https://api.github.com/users/kannon92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kannon92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kannon92/subscriptions",
"organizations_url": "https://api.github.com/users/kannon92/orgs",
"repos_url": "https://api.github.com/users/kannon92/repos",
"events_url": "https://api.github.com/users/kannon92/events{/privacy}",
"received_events_url": "https://api.github.com/users/kannon92/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 4
| 2024-04-26T19:29:55
| 2024-12-23T14:37:43
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I realize that this is a big ask, but I am learning more and more about inferencing and I've heard that vLLM tends to have better performance for multi-GPU setups.
Ollama has a great UX and I love the tight integration with llama.cpp, but it would be nice to start exploring how one could use Ollama models with vLLM.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3953/reactions",
"total_count": 14,
"+1": 14,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3953/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1581
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1581/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1581/comments
|
https://api.github.com/repos/ollama/ollama/issues/1581/events
|
https://github.com/ollama/ollama/issues/1581
| 2,046,621,921
|
I_kwDOJ0Z1Ps55_Pjh
| 1,581
|
I'm using ollama to run via API on Debian (WSL). When I give the command "ollama serve" the terminal is busy
|
{
"login": "EliasPereirah",
"id": 16616409,
"node_id": "MDQ6VXNlcjE2NjE2NDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/16616409?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EliasPereirah",
"html_url": "https://github.com/EliasPereirah",
"followers_url": "https://api.github.com/users/EliasPereirah/followers",
"following_url": "https://api.github.com/users/EliasPereirah/following{/other_user}",
"gists_url": "https://api.github.com/users/EliasPereirah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EliasPereirah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EliasPereirah/subscriptions",
"organizations_url": "https://api.github.com/users/EliasPereirah/orgs",
"repos_url": "https://api.github.com/users/EliasPereirah/repos",
"events_url": "https://api.github.com/users/EliasPereirah/events{/privacy}",
"received_events_url": "https://api.github.com/users/EliasPereirah/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-12-18T13:02:43
| 2023-12-19T18:58:56
| 2023-12-19T18:58:56
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |

|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1581/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2635
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2635/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2635/comments
|
https://api.github.com/repos/ollama/ollama/issues/2635/events
|
https://github.com/ollama/ollama/issues/2635
| 2,146,908,129
|
I_kwDOJ0Z1Ps5_9zfh
| 2,635
|
questions for mistral
|
{
"login": "VadimBoev",
"id": 33985574,
"node_id": "MDQ6VXNlcjMzOTg1NTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/33985574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VadimBoev",
"html_url": "https://github.com/VadimBoev",
"followers_url": "https://api.github.com/users/VadimBoev/followers",
"following_url": "https://api.github.com/users/VadimBoev/following{/other_user}",
"gists_url": "https://api.github.com/users/VadimBoev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VadimBoev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VadimBoev/subscriptions",
"organizations_url": "https://api.github.com/users/VadimBoev/orgs",
"repos_url": "https://api.github.com/users/VadimBoev/repos",
"events_url": "https://api.github.com/users/VadimBoev/events{/privacy}",
"received_events_url": "https://api.github.com/users/VadimBoev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-02-21T14:34:16
| 2024-02-21T20:40:44
| 2024-02-21T20:40:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi.
How do I get it to respond in only one language?
And how do I remove the censoring so that it can use obscene language?
|
{
"login": "VadimBoev",
"id": 33985574,
"node_id": "MDQ6VXNlcjMzOTg1NTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/33985574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VadimBoev",
"html_url": "https://github.com/VadimBoev",
"followers_url": "https://api.github.com/users/VadimBoev/followers",
"following_url": "https://api.github.com/users/VadimBoev/following{/other_user}",
"gists_url": "https://api.github.com/users/VadimBoev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VadimBoev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VadimBoev/subscriptions",
"organizations_url": "https://api.github.com/users/VadimBoev/orgs",
"repos_url": "https://api.github.com/users/VadimBoev/repos",
"events_url": "https://api.github.com/users/VadimBoev/events{/privacy}",
"received_events_url": "https://api.github.com/users/VadimBoev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2635/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2635/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2311
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2311/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2311/comments
|
https://api.github.com/repos/ollama/ollama/issues/2311/events
|
https://github.com/ollama/ollama/pull/2311
| 2,112,495,826
|
PR_kwDOJ0Z1Ps5lszEY
| 2,311
|
Gosec
|
{
"login": "alpe",
"id": 28003,
"node_id": "MDQ6VXNlcjI4MDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/28003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alpe",
"html_url": "https://github.com/alpe",
"followers_url": "https://api.github.com/users/alpe/followers",
"following_url": "https://api.github.com/users/alpe/following{/other_user}",
"gists_url": "https://api.github.com/users/alpe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alpe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alpe/subscriptions",
"organizations_url": "https://api.github.com/users/alpe/orgs",
"repos_url": "https://api.github.com/users/alpe/repos",
"events_url": "https://api.github.com/users/alpe/events{/privacy}",
"received_events_url": "https://api.github.com/users/alpe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-01T13:16:48
| 2024-11-21T08:15:27
| 2024-11-21T08:15:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2311",
"html_url": "https://github.com/ollama/ollama/pull/2311",
"diff_url": "https://github.com/ollama/ollama/pull/2311.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2311.patch",
"merged_at": null
}
|
Start fixing some [gosec](https://github.com/securego/gosec) reports
* more restrictive file permission for ~/.ollama files
* add `--request-header-timeout` param to server to prevent [slowloris](https://www.netscout.com/what-is-ddos/slowloris-attacks) DoS attacks
* annotate false positive in `llm/payload_common.go` and DRY
Please note that there are still open issues that need to be addressed or annotated with more context:
* `cmd/cmd.go:761:12`: G204: Subprocess launched with a potential tainted input or cmd arguments (gosec)
* `server/upload.go:5:2`: G501: Blocklisted import crypto/md5: weak cryptographic primitive (gosec)
It would be good to add `gosec` to the `.golangci.yaml` linter config and CI once all issues are addressed.
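For reference, a minimal sketch of what that `.golangci.yaml` change could look like (the exact layout of the project's config may differ):

```yaml
linters:
  enable:
    - gosec
```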
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2311/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2311/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/85
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/85/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/85/comments
|
https://api.github.com/repos/ollama/ollama/issues/85/events
|
https://github.com/ollama/ollama/issues/85
| 1,806,769,240
|
I_kwDOJ0Z1Ps5rsRxY
| 85
|
Too many tokens error not provided to users
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-07-17T00:15:57
| 2023-07-19T02:41:56
| 2023-07-19T02:41:56
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When running a model and passing a context larger than the context window, the `ollama` server prints an error: `llama_tokenize: too many tokens`, but the error is not surfaced to the client.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/85/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/85/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3521
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3521/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3521/comments
|
https://api.github.com/repos/ollama/ollama/issues/3521/events
|
https://github.com/ollama/ollama/issues/3521
| 2,229,593,240
|
I_kwDOJ0Z1Ps6E5OSY
| 3,521
|
Run ollama list/pull got an error
|
{
"login": "imagebody",
"id": 11532038,
"node_id": "MDQ6VXNlcjExNTMyMDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/11532038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imagebody",
"html_url": "https://github.com/imagebody",
"followers_url": "https://api.github.com/users/imagebody/followers",
"following_url": "https://api.github.com/users/imagebody/following{/other_user}",
"gists_url": "https://api.github.com/users/imagebody/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imagebody/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imagebody/subscriptions",
"organizations_url": "https://api.github.com/users/imagebody/orgs",
"repos_url": "https://api.github.com/users/imagebody/repos",
"events_url": "https://api.github.com/users/imagebody/events{/privacy}",
"received_events_url": "https://api.github.com/users/imagebody/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 9
| 2024-04-07T06:40:45
| 2024-05-18T04:13:35
| 2024-05-18T04:13:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I run Ollama in a Docker container, like this:
ollama pull llama2:70b
Then, I get an error:
Error: something went wrong, please see the ollama server logs for details
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
_No response_
### Platform
Docker
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3521/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/754
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/754/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/754/comments
|
https://api.github.com/repos/ollama/ollama/issues/754/events
|
https://github.com/ollama/ollama/issues/754
| 1,936,679,010
|
I_kwDOJ0Z1Ps5zb2Bi
| 754
|
Support for Autogen
|
{
"login": "greg-peters",
"id": 32651304,
"node_id": "MDQ6VXNlcjMyNjUxMzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/32651304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/greg-peters",
"html_url": "https://github.com/greg-peters",
"followers_url": "https://api.github.com/users/greg-peters/followers",
"following_url": "https://api.github.com/users/greg-peters/following{/other_user}",
"gists_url": "https://api.github.com/users/greg-peters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/greg-peters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/greg-peters/subscriptions",
"organizations_url": "https://api.github.com/users/greg-peters/orgs",
"repos_url": "https://api.github.com/users/greg-peters/repos",
"events_url": "https://api.github.com/users/greg-peters/events{/privacy}",
"received_events_url": "https://api.github.com/users/greg-peters/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-10-11T02:44:02
| 2023-12-19T05:51:14
| 2023-10-11T13:24:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
#305 Requesting support to use ollama with Autogen
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/754/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/754/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3508
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3508/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3508/comments
|
https://api.github.com/repos/ollama/ollama/issues/3508/events
|
https://github.com/ollama/ollama/pull/3508
| 2,228,998,613
|
PR_kwDOJ0Z1Ps5r5ZPc
| 3,508
|
no rope parameters
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-06T01:00:32
| 2024-04-06T01:46:07
| 2024-04-06T01:46:06
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3508",
"html_url": "https://github.com/ollama/ollama/pull/3508",
"diff_url": "https://github.com/ollama/ollama/pull/3508.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3508.patch",
"merged_at": "2024-04-06T01:46:06"
}
|
rope parameters should be omitted because the model should set them
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3508/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/930
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/930/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/930/comments
|
https://api.github.com/repos/ollama/ollama/issues/930/events
|
https://github.com/ollama/ollama/issues/930
| 1,964,773,211
|
I_kwDOJ0Z1Ps51HA9b
| 930
|
FR: Enhance command for pull and run
|
{
"login": "hemanth",
"id": 18315,
"node_id": "MDQ6VXNlcjE4MzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/18315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hemanth",
"html_url": "https://github.com/hemanth",
"followers_url": "https://api.github.com/users/hemanth/followers",
"following_url": "https://api.github.com/users/hemanth/following{/other_user}",
"gists_url": "https://api.github.com/users/hemanth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hemanth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hemanth/subscriptions",
"organizations_url": "https://api.github.com/users/hemanth/orgs",
"repos_url": "https://api.github.com/users/hemanth/repos",
"events_url": "https://api.github.com/users/hemanth/events{/privacy}",
"received_events_url": "https://api.github.com/users/hemanth/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-10-27T05:03:59
| 2023-10-27T05:23:15
| 2023-10-27T05:23:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Instead of:
```sh
ollama pull hemanth/booksummarizer
ollama run hemanth/booksummarizer
```
It would be nice to have
```sh
ollama --pull-run hemanth/booksummarizer
```
or
```sh
ollama pullrun hemanth/booksummarizer
```
or
```sh
ollama pull hemanth/booksummarizer --run
```
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/930/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6802
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6802/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6802/comments
|
https://api.github.com/repos/ollama/ollama/issues/6802/events
|
https://github.com/ollama/ollama/issues/6802
| 2,526,417,861
|
I_kwDOJ0Z1Ps6WlhPF
| 6,802
|
Potential Bug in AMD GPU Detection leading to rocBLAS Initialization Failure
|
{
"login": "Bihan",
"id": 3397983,
"node_id": "MDQ6VXNlcjMzOTc5ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3397983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bihan",
"html_url": "https://github.com/Bihan",
"followers_url": "https://api.github.com/users/Bihan/followers",
"following_url": "https://api.github.com/users/Bihan/following{/other_user}",
"gists_url": "https://api.github.com/users/Bihan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bihan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bihan/subscriptions",
"organizations_url": "https://api.github.com/users/Bihan/orgs",
"repos_url": "https://api.github.com/users/Bihan/repos",
"events_url": "https://api.github.com/users/Bihan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bihan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-09-14T13:33:33
| 2024-10-26T21:04:15
| 2024-10-26T21:04:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This issue happens **randomly**: sometimes the AMD GPU detection works, but at other times it fails, resulting in the following error:
`rocBLAS error: Could not initialize Tensile host: No devices found.`
When the issue occurs, certain GPU nodes in the `/sys/class/kfd/kfd/topology/nodes/` directory may be inaccessible due to permission errors. When these nodes are inaccessible, the GPU indexing can become incorrect.
The issue might be related to the [AMDGetGPUInfo()](https://github.com/ollama/ollama/blob/56b9af336a049dae37f7cc62246121ac9d18576e/gpu/amd_linux.go#L84) function. It skips a node if a permission error occurs while reading the properties file (see [here](https://github.com/ollama/ollama/blob/56b9af336a049dae37f7cc62246121ac9d18576e/gpu/amd_linux.go#L104)), which may cause some GPU nodes to be missed and result in incorrect GPU indexing.
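A minimal, purely illustrative sketch of the indexing concern (the function name and logic here are hypothetical, not the real `AMDGetGPUInfo()` code): when topology nodes are skipped, the id reported for a GPU must come from a counter over GPUs actually seen, not from the kfd node number, or the only GPU on a one-GPU box can end up reported as `gpu=3`.

```go
package main

import "fmt"

// gpuIndices is a hypothetical sketch: it assigns device ids from a counter
// that only advances for nodes actually exposed as GPUs, so skipped nodes
// (the CPU entries, or nodes unreadable due to permission errors) do not
// shift the ids handed to HIP.
func gpuIndices(readableGPU []bool) []int {
	ids := []int{}
	gpuID := 0 // counts only GPUs actually exposed, independent of node number
	for _, ok := range readableGPU {
		if !ok {
			continue // skipped node: must not advance the GPU counter
		}
		ids = append(ids, gpuID)
		gpuID++
	}
	return ids
}

func main() {
	// Nodes 0-2 are the CPU / unreadable entries; node 3 is the single GPU.
	fmt.Println(gpuIndices([]bool{false, false, false, true})) // prints "[0]"
}
```

If the node number were used directly instead of the counter, the single GPU above would be reported as device 3, matching the failure log.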
Below are the output samples for when the GPU detection does not work (non-working case) and when it works as expected (working case).
**Non-Working Output (Failure Case):**
Here’s the output when the rocBLAS error occurs:
```
time=2024-09-09T08:39:19.349Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
...
time=2024-09-09T08:39:19.362Z level=INFO source=amd_linux.go:345 msg="amdgpu is supported" gpu=3 gpu_type=gfx942
...
time=2024-09-09T08:39:19.362Z level=INFO source=types.go:107 msg="inference compute" id=3 library=rocm variant="" compute=gfx942 driver=6.7 name=1002:74a1 total="192.0 GiB" available="191.7 GiB"
```
**Notice**, `gpu=3 gpu_type=gfx942`
**Working Output (Success Case):**
Here’s the output when serving works:
```
2024-09-09T13:55:25.972961384Z time=2024-09-09T13:55:25.972Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
2024-09-09T13:55:25.979418720Z time=2024-09-09T13:55:25.979Z level=INFO source=amd_linux.go:345 msg="amdgpu is supported" gpu=0 gpu_type=gfx942
...
...
2024-09-09T13:55:25.979551298Z time=2024-09-09T13:55:25.979Z level=INFO source=types.go:107 msg="inference compute" id=0 library=rocm variant="" compute=gfx942 driver=6.7 name=1002:74a1 total="192.0 GiB" available="191.7 GiB"
```
**Notice**, `gpu=0 gpu_type=gfx942`
**Important information about AMD GPU Drivers:**
The AMD GPU driver always exposes the host CPU(s) first, but the HIP libraries expect device zero to be the first GPU, not the CPU.
**Steps To Reproduce:**
1. Run a Docker container using `ollama/ollama:rocm` on a machine with a single `MI300X`.
2. Inside the container, run `ollama run llama3.1:70B`.
Related discussion: see [#6423](https://github.com/ollama/ollama/issues/6423)
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.9
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6802/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1023
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1023/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1023/comments
|
https://api.github.com/repos/ollama/ollama/issues/1023/events
|
https://github.com/ollama/ollama/pull/1023
| 1,980,201,430
|
PR_kwDOJ0Z1Ps5evs1E
| 1,023
|
docs: clarify where the models are stored in the faq
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-06T22:40:07
| 2023-11-08T01:59:55
| 2023-11-08T01:59:55
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1023",
"html_url": "https://github.com/ollama/ollama/pull/1023",
"diff_url": "https://github.com/ollama/ollama/pull/1023.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1023.patch",
"merged_at": "2023-11-08T01:59:55"
}
|
Where the models are stored is a common enough question in the Discord. This clarifies it a bit further, helping the user understand the structure.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1023/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1023/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6801
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6801/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6801/comments
|
https://api.github.com/repos/ollama/ollama/issues/6801/events
|
https://github.com/ollama/ollama/issues/6801
| 2,526,318,686
|
I_kwDOJ0Z1Ps6WlJBe
| 6,801
|
Ollama can't update the binary
|
{
"login": "suizideFloat",
"id": 4026849,
"node_id": "MDQ6VXNlcjQwMjY4NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4026849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suizideFloat",
"html_url": "https://github.com/suizideFloat",
"followers_url": "https://api.github.com/users/suizideFloat/followers",
"following_url": "https://api.github.com/users/suizideFloat/following{/other_user}",
"gists_url": "https://api.github.com/users/suizideFloat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suizideFloat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suizideFloat/subscriptions",
"organizations_url": "https://api.github.com/users/suizideFloat/orgs",
"repos_url": "https://api.github.com/users/suizideFloat/repos",
"events_url": "https://api.github.com/users/suizideFloat/events{/privacy}",
"received_events_url": "https://api.github.com/users/suizideFloat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-09-14T12:00:58
| 2025-01-15T21:27:17
| 2025-01-15T21:27:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
As I understand it, to update Ollama to the current version you run the command `curl -fsSL https://ollama.com/install.sh | sh`, as if installing for the first time. At least that worked for me before.
Now I am running 0.3.8 (`ollama version is 0.3.8`) and want to update to 0.3.10 via the default curl command.
It always ends with the following error message:
```
# curl -fsSL https://ollama.com/install.sh | sh
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%#=#=#
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
```
Is there anything that can fix this? Why is it happening? Is there a way to install manually? How?
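As a hedged workaround sketch (the download URL is an assumption taken from the install script; verify it for your platform): the `gzip: stdin: unexpected end of file` error usually means a truncated download, so downloading the bundle to a file and verifying it before extracting fails loudly instead of mid-`tar`. The commented lines show the manual-install idea; the runnable lines demonstrate that `gzip -t` catches a truncated archive.

```shell
# Manual-install sketch (URL assumed from the install script, not verified here):
#   curl -fL -o ollama-linux-amd64.tgz https://ollama.com/download/ollama-linux-amd64.tgz
#   gzip -t ollama-linux-amd64.tgz && sudo tar -C /usr/local -xzf ollama-linux-amd64.tgz
#
# Demonstration that gzip -t detects truncation:
echo "payload" > f.txt
tar -czf ok.tgz f.txt
head -c 20 ok.tgz > truncated.tgz        # simulate a cut-off download
gzip -t ok.tgz && echo "ok.tgz: intact"
gzip -t truncated.tgz 2>/dev/null || echo "truncated.tgz: corrupt"
```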
### OS
Linux
### GPU
Other
### CPU
Intel
### Ollama version
0.3.8
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6801/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4751
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4751/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4751/comments
|
https://api.github.com/repos/ollama/ollama/issues/4751/events
|
https://github.com/ollama/ollama/pull/4751
| 2,328,018,179
|
PR_kwDOJ0Z1Ps5xIwUy
| 4,751
|
Add golang pprof
|
{
"login": "llhhbc",
"id": 6076617,
"node_id": "MDQ6VXNlcjYwNzY2MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6076617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/llhhbc",
"html_url": "https://github.com/llhhbc",
"followers_url": "https://api.github.com/users/llhhbc/followers",
"following_url": "https://api.github.com/users/llhhbc/following{/other_user}",
"gists_url": "https://api.github.com/users/llhhbc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/llhhbc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/llhhbc/subscriptions",
"organizations_url": "https://api.github.com/users/llhhbc/orgs",
"repos_url": "https://api.github.com/users/llhhbc/repos",
"events_url": "https://api.github.com/users/llhhbc/events{/privacy}",
"received_events_url": "https://api.github.com/users/llhhbc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-05-31T14:47:31
| 2024-05-31T15:09:30
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4751",
"html_url": "https://github.com/ollama/ollama/pull/4751",
"diff_url": "https://github.com/ollama/ollama/pull/4751.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4751.patch",
"merged_at": null
}
|
Add Go pprof for process analysis.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4751/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7928
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7928/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7928/comments
|
https://api.github.com/repos/ollama/ollama/issues/7928/events
|
https://github.com/ollama/ollama/issues/7928
| 2,716,809,386
|
I_kwDOJ0Z1Ps6h7ziq
| 7,928
|
ollama api(port:11434)Regarding parallel processing and CPU single core usage
|
{
"login": "dmmhk",
"id": 74334936,
"node_id": "MDQ6VXNlcjc0MzM0OTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/74334936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmmhk",
"html_url": "https://github.com/dmmhk",
"followers_url": "https://api.github.com/users/dmmhk/followers",
"following_url": "https://api.github.com/users/dmmhk/following{/other_user}",
"gists_url": "https://api.github.com/users/dmmhk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmmhk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmmhk/subscriptions",
"organizations_url": "https://api.github.com/users/dmmhk/orgs",
"repos_url": "https://api.github.com/users/dmmhk/repos",
"events_url": "https://api.github.com/users/dmmhk/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmmhk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-12-04T07:42:34
| 2024-12-23T08:03:48
| 2024-12-23T08:03:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
**Regarding parallel processing and CPU single core usage**
API: http://localhost:11434/api/generate
The waiting time for simultaneous API requests is too long, and CPU processing does not use multiple cores.
Are there any optimization methods available?
**OS**
Windows 11
**GPU**
Nvidia 3080 16G
**CPU**
AMD R9 5900HX
**Ollama version**
0.4.5
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7928/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/4623
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4623/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4623/comments
|
https://api.github.com/repos/ollama/ollama/issues/4623/events
|
https://github.com/ollama/ollama/issues/4623
| 2,316,273,721
|
I_kwDOJ0Z1Ps6KD4g5
| 4,623
|
The ollama server is stopped when I submitted jobs parallelly
|
{
"login": "inoue0426",
"id": 8393063,
"node_id": "MDQ6VXNlcjgzOTMwNjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8393063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/inoue0426",
"html_url": "https://github.com/inoue0426",
"followers_url": "https://api.github.com/users/inoue0426/followers",
"following_url": "https://api.github.com/users/inoue0426/following{/other_user}",
"gists_url": "https://api.github.com/users/inoue0426/gists{/gist_id}",
"starred_url": "https://api.github.com/users/inoue0426/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/inoue0426/subscriptions",
"organizations_url": "https://api.github.com/users/inoue0426/orgs",
"repos_url": "https://api.github.com/users/inoue0426/repos",
"events_url": "https://api.github.com/users/inoue0426/events{/privacy}",
"received_events_url": "https://api.github.com/users/inoue0426/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-05-24T21:16:03
| 2024-08-09T23:22:40
| 2024-08-09T23:22:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
When I submitted the jobs below, the server suddenly showed only POST requests, stopped showing HEAD, and then stopped responding.
Do you know how to deal with this?
## Python interface.
```python
import subprocess

from joblib import Parallel, delayed
from tqdm import tqdm

def run_test(t):
    subprocess.run(['./summary.sh', str(t)])

Parallel(n_jobs=-1)(delayed(run_test)(t) for t in tqdm(untested))
```
## shell script
```sh
#!/bin/zsh
output=$(./ollama-darwin run llama3:8b-instruct-q5_K_M \
  "Summarise below prompt. No explanation, just return summarized text. \
  INPUT TEXT: "${1}" ")
processed_output=$(echo "${output}" | tr -d ',' | tr '"' ',' | tr -d "'" | tr -d '\n')
echo "\"${1}\",\"${processed_output}\"" >> summary.csv
```
## ollama server log
```
[GIN] 2024/05/24 - 17:09:06 | 200 | 20.916µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/24 - 17:09:06 | 200 | 363µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/24 - 17:09:06 | 200 | 11.275035333s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/05/24 - 17:09:07 | 200 | 18.917µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/24 - 17:09:07 | 200 | 394.625µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/24 - 17:09:07 | 200 | 10.529125125s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/05/24 - 17:09:07 | 200 | 20.333µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/24 - 17:09:07 | 200 | 441.5µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/24 - 17:09:08 | 200 | 10.6834455s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/05/24 - 17:09:08 | 200 | 21.125µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/24 - 17:09:08 | 200 | 833.125µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/24 - 17:09:09 | 200 | 10.325507834s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/05/24 - 17:09:11 | 200 | 10.764251375s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/05/24 - 17:09:12 | 200 | 10.599380292s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/05/24 - 17:09:14 | 200 | 10.261547416s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/05/24 - 17:09:15 | 200 | 9.766182917s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/05/24 - 17:09:16 | 200 | 10.306288125s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/05/24 - 17:09:17 | 200 | 10.834983416s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/05/24 - 17:09:19 | 200 | 12.06265875s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/05/24 - 17:09:20 | 200 | 12.753880667s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/05/24 - 17:09:20 | 200 | 12.012651375s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/05/24 - 17:09:54 | 200 | 18.5µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/24 - 17:09:54 | 200 | 2.248125ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/24 - 17:09:55 | 200 | 1.189454292s | 127.0.0.1 | POST "/api/generate"
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.38
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4623/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1823
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1823/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1823/comments
|
https://api.github.com/repos/ollama/ollama/issues/1823/events
|
https://github.com/ollama/ollama/issues/1823
| 2,068,577,852
|
I_kwDOJ0Z1Ps57S_48
| 1,823
|
[ENHANCEMENT] Add github action for tests and lint on this repo.
|
{
"login": "rgaidot",
"id": 5269,
"node_id": "MDQ6VXNlcjUyNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rgaidot",
"html_url": "https://github.com/rgaidot",
"followers_url": "https://api.github.com/users/rgaidot/followers",
"following_url": "https://api.github.com/users/rgaidot/following{/other_user}",
"gists_url": "https://api.github.com/users/rgaidot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rgaidot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rgaidot/subscriptions",
"organizations_url": "https://api.github.com/users/rgaidot/orgs",
"repos_url": "https://api.github.com/users/rgaidot/repos",
"events_url": "https://api.github.com/users/rgaidot/events{/privacy}",
"received_events_url": "https://api.github.com/users/rgaidot/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-01-06T12:13:40
| 2024-01-06T12:47:54
| 2024-01-06T12:18:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Currently, the tests and the linter are executed in another repo; it would be useful to move them into the main repo. https://github.com/jmorganca/ollama/blob/mxyng/build-gpus/.github/workflows/test.yaml
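A minimal sketch of what such a workflow could look like in the main repo (job names, action versions, and the Go version are assumptions, not taken from the linked file):

```yaml
name: test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.21"
      - run: go test ./...
      - uses: golangci/golangci-lint-action@v4
```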
|
{
"login": "rgaidot",
"id": 5269,
"node_id": "MDQ6VXNlcjUyNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rgaidot",
"html_url": "https://github.com/rgaidot",
"followers_url": "https://api.github.com/users/rgaidot/followers",
"following_url": "https://api.github.com/users/rgaidot/following{/other_user}",
"gists_url": "https://api.github.com/users/rgaidot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rgaidot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rgaidot/subscriptions",
"organizations_url": "https://api.github.com/users/rgaidot/orgs",
"repos_url": "https://api.github.com/users/rgaidot/repos",
"events_url": "https://api.github.com/users/rgaidot/events{/privacy}",
"received_events_url": "https://api.github.com/users/rgaidot/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1823/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/488
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/488/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/488/comments
|
https://api.github.com/repos/ollama/ollama/issues/488/events
|
https://github.com/ollama/ollama/pull/488
| 1,886,606,495
|
PR_kwDOJ0Z1Ps5Z0giw
| 488
|
add cuda docker image
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-07T21:54:50
| 2023-09-08T14:38:21
| 2023-09-08T14:38:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/488",
"html_url": "https://github.com/ollama/ollama/pull/488",
"diff_url": "https://github.com/ollama/ollama/pull/488.diff",
"patch_url": "https://github.com/ollama/ollama/pull/488.patch",
"merged_at": "2023-09-08T14:38:20"
}
|
`Dockerfile.cuda` requires [`nvidia-container-toolkit`](https://gitlab.com/nvidia/container-toolkit/container-toolkit) to run successfully:
```
$ docker build -t ollama:cuda -f Dockerfile.cuda .
$ docker run -d --gpus=all -p 11434:11434 -v $HOME/.ollama:/home/ollama/.ollama ollama:cuda
```
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/488/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7906
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7906/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7906/comments
|
https://api.github.com/repos/ollama/ollama/issues/7906/events
|
https://github.com/ollama/ollama/pull/7906
| 2,711,123,848
|
PR_kwDOJ0Z1Ps6Dtyf4
| 7,906
|
Update README.md
|
{
"login": "dmayboroda",
"id": 2357342,
"node_id": "MDQ6VXNlcjIzNTczNDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2357342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmayboroda",
"html_url": "https://github.com/dmayboroda",
"followers_url": "https://api.github.com/users/dmayboroda/followers",
"following_url": "https://api.github.com/users/dmayboroda/following{/other_user}",
"gists_url": "https://api.github.com/users/dmayboroda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmayboroda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmayboroda/subscriptions",
"organizations_url": "https://api.github.com/users/dmayboroda/orgs",
"repos_url": "https://api.github.com/users/dmayboroda/repos",
"events_url": "https://api.github.com/users/dmayboroda/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmayboroda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-12-02T09:01:12
| 2024-12-02T09:14:47
| 2024-12-02T09:14:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7906",
"html_url": "https://github.com/ollama/ollama/pull/7906",
"diff_url": "https://github.com/ollama/ollama/pull/7906.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7906.patch",
"merged_at": "2024-12-02T09:14:47"
}
|
Hey Ollama builders!
That's a small update for the README: I added the Minima project, which uses an Ollama Docker container.
Thank you!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7906/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7715
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7715/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7715/comments
|
https://api.github.com/repos/ollama/ollama/issues/7715/events
|
https://github.com/ollama/ollama/issues/7715
| 2,666,880,721
|
I_kwDOJ0Z1Ps6e9V7R
| 7,715
|
Ollama 0.4 not using VRAM on AMD RX 7900 XTX
|
{
"login": "galizhur",
"id": 48592261,
"node_id": "MDQ6VXNlcjQ4NTkyMjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/48592261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/galizhur",
"html_url": "https://github.com/galizhur",
"followers_url": "https://api.github.com/users/galizhur/followers",
"following_url": "https://api.github.com/users/galizhur/following{/other_user}",
"gists_url": "https://api.github.com/users/galizhur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/galizhur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/galizhur/subscriptions",
"organizations_url": "https://api.github.com/users/galizhur/orgs",
"repos_url": "https://api.github.com/users/galizhur/repos",
"events_url": "https://api.github.com/users/galizhur/events{/privacy}",
"received_events_url": "https://api.github.com/users/galizhur/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-11-18T02:28:34
| 2024-12-04T10:45:22
| 2024-11-18T23:33:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Any model that I load seems to use system RAM instead of VRAM. This doesn't happen on version 0.3 of Ollama. Something is clearly wrong, because Ollama freezes after a couple of requests, and the only way to make it work again is to restart it every minute.
Here is the server log [server.log](https://github.com/user-attachments/files/17793658/server.log)
Below I also attached two screenshots: one with a model loaded and one with Ollama shut down.
Model unloaded

Model loaded

### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.4.2
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7715/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8167
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8167/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8167/comments
|
https://api.github.com/repos/ollama/ollama/issues/8167/events
|
https://github.com/ollama/ollama/issues/8167
| 2,749,286,233
|
I_kwDOJ0Z1Ps6j3sdZ
| 8,167
|
Error: max retries exceeded for all ollama model pulls (read: connection reset by peer)
|
{
"login": "saisun229",
"id": 36823386,
"node_id": "MDQ6VXNlcjM2ODIzMzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/36823386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saisun229",
"html_url": "https://github.com/saisun229",
"followers_url": "https://api.github.com/users/saisun229/followers",
"following_url": "https://api.github.com/users/saisun229/following{/other_user}",
"gists_url": "https://api.github.com/users/saisun229/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saisun229/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saisun229/subscriptions",
"organizations_url": "https://api.github.com/users/saisun229/orgs",
"repos_url": "https://api.github.com/users/saisun229/repos",
"events_url": "https://api.github.com/users/saisun229/events{/privacy}",
"received_events_url": "https://api.github.com/users/saisun229/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-12-19T05:18:48
| 2025-01-13T07:26:25
| 2025-01-13T07:26:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
ollama run llama3.1:8b
pulling manifest
pulling da63a910e349... 0% ▕ ▏ 0 B/ 52 GB
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/da/da63a910e34997d50c9f21cc7f16996d1e76e1c128b13319edd68348f760ecc7/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20241219%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20241219T045002Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=4cd2e99e8961c3e4ccb3608d642ee997aad5f591a316ac24dc88a900a629f587": read tcp 172.25.0.2:37068->172.66.0.236:443: read: connection reset by peer
Tried with different Ollama versions, and with tiny models as well; it didn't work.
Same issue with and without a VPN. It was working previously.
No extra information in the logs. Tried both outside and inside a Docker container.
Your help is needed.
Linux (Ubuntu)
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8167/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1024
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1024/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1024/comments
|
https://api.github.com/repos/ollama/ollama/issues/1024/events
|
https://github.com/ollama/ollama/issues/1024
| 1,980,359,679
|
I_kwDOJ0Z1Ps52CeP_
| 1,024
|
Packaging ollama + cuda for Arch Linux
|
{
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/followers",
"following_url": "https://api.github.com/users/xyproto/following{/other_user}",
"gists_url": "https://api.github.com/users/xyproto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyproto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyproto/subscriptions",
"organizations_url": "https://api.github.com/users/xyproto/orgs",
"repos_url": "https://api.github.com/users/xyproto/repos",
"events_url": "https://api.github.com/users/xyproto/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyproto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 11
| 2023-11-07T01:19:51
| 2023-12-25T21:47:15
| 2023-11-19T13:38:57
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi! Arch Linux package maintainer for the `ollama` and `ollama-cuda` packages here.
Thanks for creating Ollama, it makes LLMs more fun to deal with!
When compiling `v0.1.8` for Arch Linux, using this PKGBUILD:
```bash
pkgname=ollama-cuda
pkgdesc='Create, run and share large language models (LLMs) with CUDA'
pkgver=0.1.8
pkgrel=1
arch=(x86_64)
url='https://github.com/jmorganca/ollama'
license=(MIT)
provides=(ollama)
conflicts=(ollama)
makedepends=(cmake cuda git go setconf)
_ollamacommit=e21579a0f1548e2d1f77411af3df2037c1f144fe # tag: v0.1.8
# The git submodule commit hashes are here:
# https://github.com/jmorganca/ollama/tree/v0.1.8/llm/llama.cpp
_ggmlcommit=9e232f0234073358e7031c1b8d7aa45020469a3b
_ggufcommit=9e70cc03229df19ca2d28ce23cc817198f897278
source=(git+$url#commit=$_ollamacommit
ggml::git+https://github.com/ggerganov/llama.cpp#commit=$_ggmlcommit
gguf::git+https://github.com/ggerganov/llama.cpp#commit=$_ggufcommit)
b2sums=('SKIP'
'SKIP'
'SKIP')
prepare() {
cd ${pkgname/-cuda}
rm -frv llm/llama.cpp/gg{ml,uf}
# Copy git submodule files instead of symlinking because the build process is sensitive to symlinks.
cp -r "$srcdir/ggml" llm/llama.cpp/ggml
cp -r "$srcdir/gguf" llm/llama.cpp/gguf
# Do not git clone when "go generate" is being run.
sed -i 's,git submodule,true,g' llm/llama.cpp/generate_linux.go
# Set the version number
setconf version/version.go 'var Version string' "\"$pkgver\""
}
build() {
export CGO_CFLAGS="$CFLAGS" CGO_CPPFLAGS="$CPPFLAGS" CGO_CXXFLAGS="$CXXFLAGS" CGO_LDFLAGS="$LDFLAGS"
cd ${pkgname/-cuda}
go generate ./...
go build -buildmode=pie -trimpath -mod=readonly -modcacherw -ldflags=-linkmode=external -ldflags=-buildid=''
}
check() {
cd ${pkgname/-cuda}
go test ./...
}
package() {
cd ${pkgname/-cuda}
install -Dm755 ${pkgname/-cuda} "$pkgdir/usr/bin/${pkgname/-cuda}"
install -Dm644 LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
}
```
I get:
```
- Found CUDAToolkit: /opt/cuda/include (found version "12.3.52")
-- cuBLAS found
CMake Error at /usr/share/cmake/Modules/CMakeDetermineCompilerId.cmake:753 (message):
Compiling the CUDA compiler identification source file
"CMakeCUDACompilerId.cu" failed.
Compiler: /opt/cuda/bin/nvcc
Build flags:
Id flags: --keep;--keep-dir;tmp -v
The output was:
2
#$ _NVVM_BRANCH_=nvvm
#$ _SPACE_=
#$ _CUDART_=cudart
#$ _HERE_=/opt/cuda/bin
#$ _THERE_=/opt/cuda/bin
#$ _TARGET_SIZE_=
#$ _TARGET_DIR_=
#$ _TARGET_DIR_=targets/x86_64-linux
#$ TOP=/opt/cuda/bin/..
#$ NVVMIR_LIBRARY_DIR=/opt/cuda/bin/../nvvm/libdevice
#$ LD_LIBRARY_PATH=/opt/cuda/bin/../lib:
#$
PATH=/opt/cuda/bin/../nvvm/bin:/opt/cuda/bin:/usr/lib/go/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_pn
#$ INCLUDES="-I/opt/cuda/bin/../targets/x86_64-linux/include"
#$ LIBRARIES= "-L/opt/cuda/bin/../targets/x86_64-linux/lib/stubs"
"-L/opt/cuda/bin/../targets/x86_64-linux/lib"
#$ CUDAFE_FLAGS=
#$ PTXAS_FLAGS=
#$ rm tmp/a_dlink.reg.c
#$ gcc -D__CUDA_ARCH_LIST__=520 -E -x c++ -D__CUDACC__ -D__NVCC__
"-I/opt/cuda/bin/../targets/x86_64-linux/include" -D__CUDACC_VER_MAJOR__=12
-D__CUDACC_VER_MINOR__=3 -D__CUDACC_VER_BUILD__=52
-D__CUDA_API_VER_MAJOR__=12 -D__CUDA_API_VER_MINOR__=3
-D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64
"CMakeCUDACompilerId.cu" -o "tmp/CMakeCUDACompilerId.cpp4.ii"
#$ cudafe++ --c++17 --gnu_version=130201 --display_error_number
--orig_src_file_name "CMakeCUDACompilerId.cu" --orig_src_path_name
"/build/ollama-cuda/src/ollama/llm/llama.cpp/ggml/build/cuda/CMakeFiles/3.27.7/CompilerIdCUDA/CMakeCUDACompilerId.cu"
--allow_managed --m64 --parse_templates --gen_c_file_name
"tmp/CMakeCUDACompilerId.cudafe1.cpp" --stub_file_name
"CMakeCUDACompilerId.cudafe1.stub.c" --gen_module_id_file
--module_id_file_name "tmp/CMakeCUDACompilerId.module_id"
"tmp/CMakeCUDACompilerId.cpp4.ii"
/usr/include/bits/floatn.h(86): error: invalid combination of type
specifiers
typedef __float128 _Float128;
^
/usr/include/bits/floatn-common.h(214): error: invalid combination of type
specifiers
typedef float _Float32;
^
/usr/include/bits/floatn-common.h(251): error: invalid combination of type
specifiers
typedef double _Float64;
^
/usr/include/bits/floatn-common.h(268): error: invalid combination of type
specifiers
typedef double _Float32x;
^
/usr/include/bits/floatn-common.h(285): error: invalid combination of type
specifiers
typedef long double _Float64x;
^
5 errors detected in the compilation of "CMakeCUDACompilerId.cu".
# --error 0x2 --
Call Stack (most recent call first):
/usr/share/cmake/Modules/CMakeDetermineCompilerId.cmake:8 (CMAKE_DETERMINE_COMPILER_ID_BUILD)
/usr/share/cmake/Modules/CMakeDetermineCompilerId.cmake:53 (__determine_compiler_id_test)
/usr/share/cmake/Modules/CMakeDetermineCUDACompiler.cmake:307 (CMAKE_DETERMINE_COMPILER_ID)
CMakeLists.txt:250 (enable_language)
-- Configuring incomplete, errors occurred!
llm/llama.cpp/generate_linux.go:21: running "cmake": exit status 1
```
Building the regular `ollama` package, where `cuda`/`nvcc` are not installed before building, works fine.
Attaching logs.
[ollama-cuda-0.1.8-1-x86_64-build.log](https://github.com/jmorganca/ollama/files/13274355/ollama-cuda-0.1.8-1-x86_64-build.log)
[ollama-cuda-0.1.8-1-x86_64-prepare.log](https://github.com/jmorganca/ollama/files/13274356/ollama-cuda-0.1.8-1-x86_64-prepare.log)
Advice for how to package ollama + cuda for Arch Linux is welcome.
|
{
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/followers",
"following_url": "https://api.github.com/users/xyproto/following{/other_user}",
"gists_url": "https://api.github.com/users/xyproto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyproto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyproto/subscriptions",
"organizations_url": "https://api.github.com/users/xyproto/orgs",
"repos_url": "https://api.github.com/users/xyproto/repos",
"events_url": "https://api.github.com/users/xyproto/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyproto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1024/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1024/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/755
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/755/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/755/comments
|
https://api.github.com/repos/ollama/ollama/issues/755/events
|
https://github.com/ollama/ollama/issues/755
| 1,936,738,736
|
I_kwDOJ0Z1Ps5zcEmw
| 755
|
Ollama re-attempts to pull model when served on a remote server
|
{
"login": "zenarcher007",
"id": 62445349,
"node_id": "MDQ6VXNlcjYyNDQ1MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/62445349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zenarcher007",
"html_url": "https://github.com/zenarcher007",
"followers_url": "https://api.github.com/users/zenarcher007/followers",
"following_url": "https://api.github.com/users/zenarcher007/following{/other_user}",
"gists_url": "https://api.github.com/users/zenarcher007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zenarcher007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zenarcher007/subscriptions",
"organizations_url": "https://api.github.com/users/zenarcher007/orgs",
"repos_url": "https://api.github.com/users/zenarcher007/repos",
"events_url": "https://api.github.com/users/zenarcher007/events{/privacy}",
"received_events_url": "https://api.github.com/users/zenarcher007/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-10-11T03:45:36
| 2023-10-11T03:52:49
| 2023-10-11T03:50:52
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am running the Ollama server on a remote machine, with the default port 11434 forwarded to localhost via an SSH tunnel. On my local machine, every time the client (`ollama run`) is invoked, Ollama attempts to pull the model on the server, even if it is already installed, and verify its hash, a process that takes additional time. Since most other client commands, such as `ollama list`, work as expected with the remote-server configuration, `ollama run` should be able to detect that the model is already installed on the server without re-pulling and verifying it.
This produces output such as the following:
```
> ollama run codellama:13b Hello
pulling manifest
pulling a44062a96a2b... 100% |█████████████████| (7.3/7.3 GB, 2.0 TB/s)
pulling 2c8743bdc4ad... 100% |█████████████████| (7.0/7.0 kB, 144 MB/s)
pulling 38fa20ee7daa... 100% |██████████████████| (4.8/4.8 kB, 50 MB/s)
pulling 578a2e81f706... 100% |████████████████████| (95/95 B, 2.5 MB/s)
pulling 404e21afdc6a... 100% |████████████████████| (30/30 B, 870 kB/s)
pulling 9423dcb51326... 100% |███████████████████| (508/508 B, 15 MB/s)
verifying sha256 digest
writing manifest
removing any unused layers
success
Hello! It's nice to meet you. Is there anything in particular you would like to chat about?
```
Without investigating the code too deeply, the section below from the run handler in `ollama/cmd/cmd.go` appears not to work as expected because of this issue: the model name is not matched, so execution falls through to the `PullHandler` call.
```
canonicalModelPath := server.ParseModelPath(args[0])
for _, model := range models.Models {
if model.Name == canonicalModelPath.GetShortTagname() {
return RunGenerate(cmd, args)
}
}
```
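For illustration, here is a minimal, self-contained Go sketch of the kind of tag normalization this comparison depends on. The `shortTagName` helper and the sample names are hypothetical stand-ins, not Ollama's actual `ParseModelPath`/`GetShortTagname` implementation; the point is that any mismatch between the normalized reference and the names returned by the server makes the loop miss and fall through to the pull path:

```go
package main

import (
	"fmt"
	"strings"
)

// shortTagName is a hypothetical stand-in for GetShortTagname: it
// normalizes a user-supplied model reference to "name:tag" form,
// defaulting the tag to "latest" when none is given.
func shortTagName(ref string) string {
	if strings.Contains(ref, ":") {
		return ref
	}
	return ref + ":latest"
}

// isInstalled mirrors the loop in cmd.go: it reports whether the
// normalized reference matches any name returned by the server.
func isInstalled(installed []string, ref string) bool {
	want := shortTagName(ref)
	for _, name := range installed {
		if name == want {
			return true
		}
	}
	return false
}

func main() {
	installed := []string{"codellama:13b", "llama2:latest"}
	fmt.Println(isInstalled(installed, "codellama:13b")) // true

	// If the server returned fully qualified names instead, the same
	// lookup would miss and trigger a re-pull:
	qualified := []string{"registry.example/library/codellama:13b"}
	fmt.Println(isInstalled(qualified, "codellama:13b")) // false
}
```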
|
{
"login": "zenarcher007",
"id": 62445349,
"node_id": "MDQ6VXNlcjYyNDQ1MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/62445349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zenarcher007",
"html_url": "https://github.com/zenarcher007",
"followers_url": "https://api.github.com/users/zenarcher007/followers",
"following_url": "https://api.github.com/users/zenarcher007/following{/other_user}",
"gists_url": "https://api.github.com/users/zenarcher007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zenarcher007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zenarcher007/subscriptions",
"organizations_url": "https://api.github.com/users/zenarcher007/orgs",
"repos_url": "https://api.github.com/users/zenarcher007/repos",
"events_url": "https://api.github.com/users/zenarcher007/events{/privacy}",
"received_events_url": "https://api.github.com/users/zenarcher007/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/755/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1748
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1748/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1748/comments
|
https://api.github.com/repos/ollama/ollama/issues/1748/events
|
https://github.com/ollama/ollama/pull/1748
| 2,060,864,862
|
PR_kwDOJ0Z1Ps5i-V9A
| 1,748
|
[API Client] Feature: Custom HTTP headers for every request
|
{
"login": "secondtruth",
"id": 416441,
"node_id": "MDQ6VXNlcjQxNjQ0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/416441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/secondtruth",
"html_url": "https://github.com/secondtruth",
"followers_url": "https://api.github.com/users/secondtruth/followers",
"following_url": "https://api.github.com/users/secondtruth/following{/other_user}",
"gists_url": "https://api.github.com/users/secondtruth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/secondtruth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/secondtruth/subscriptions",
"organizations_url": "https://api.github.com/users/secondtruth/orgs",
"repos_url": "https://api.github.com/users/secondtruth/repos",
"events_url": "https://api.github.com/users/secondtruth/events{/privacy}",
"received_events_url": "https://api.github.com/users/secondtruth/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2023-12-30T21:47:00
| 2024-04-21T22:58:12
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1748",
"html_url": "https://github.com/ollama/ollama/pull/1748",
"diff_url": "https://github.com/ollama/ollama/pull/1748.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1748.patch",
"merged_at": null
}
|
This pull request adds the ability to include a custom header to every request made by the Go API Client. The custom header is set using a new `Header` field in the `Client` struct.
This allows for more flexibility and customization when making requests to the API. My use case is setting an `Authorization` header to authenticate at my API proxy.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1748/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/292
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/292/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/292/comments
|
https://api.github.com/repos/ollama/ollama/issues/292/events
|
https://github.com/ollama/ollama/pull/292
| 1,837,896,002
|
PR_kwDOJ0Z1Ps5XQatd
| 292
|
Added a Websocket interface
|
{
"login": "FultonBrowne",
"id": 50185337,
"node_id": "MDQ6VXNlcjUwMTg1MzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/50185337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FultonBrowne",
"html_url": "https://github.com/FultonBrowne",
"followers_url": "https://api.github.com/users/FultonBrowne/followers",
"following_url": "https://api.github.com/users/FultonBrowne/following{/other_user}",
"gists_url": "https://api.github.com/users/FultonBrowne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FultonBrowne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FultonBrowne/subscriptions",
"organizations_url": "https://api.github.com/users/FultonBrowne/orgs",
"repos_url": "https://api.github.com/users/FultonBrowne/repos",
"events_url": "https://api.github.com/users/FultonBrowne/events{/privacy}",
"received_events_url": "https://api.github.com/users/FultonBrowne/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-08-05T18:48:38
| 2024-02-09T14:36:50
| 2023-08-09T02:02:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/292",
"html_url": "https://github.com/ollama/ollama/pull/292",
"diff_url": "https://github.com/ollama/ollama/pull/292.diff",
"patch_url": "https://github.com/ollama/ollama/pull/292.patch",
"merged_at": null
}
|
This adds an API endpoint, "/api/generatestream", that serves a WebSocket accepting JSON in the same format as the "/api/generate" endpoint.
Why?
It's for something I'm working on, and I imagine for other things people will build later on; a more "stream"-style interface may be useful and may provide a better user experience.
|
{
"login": "FultonBrowne",
"id": 50185337,
"node_id": "MDQ6VXNlcjUwMTg1MzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/50185337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FultonBrowne",
"html_url": "https://github.com/FultonBrowne",
"followers_url": "https://api.github.com/users/FultonBrowne/followers",
"following_url": "https://api.github.com/users/FultonBrowne/following{/other_user}",
"gists_url": "https://api.github.com/users/FultonBrowne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FultonBrowne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FultonBrowne/subscriptions",
"organizations_url": "https://api.github.com/users/FultonBrowne/orgs",
"repos_url": "https://api.github.com/users/FultonBrowne/repos",
"events_url": "https://api.github.com/users/FultonBrowne/events{/privacy}",
"received_events_url": "https://api.github.com/users/FultonBrowne/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/292/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7485
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7485/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7485/comments
|
https://api.github.com/repos/ollama/ollama/issues/7485/events
|
https://github.com/ollama/ollama/issues/7485
| 2,631,633,971
|
I_kwDOJ0Z1Ps6c24wz
| 7,485
|
I hope ollama can provide rerank models and speech recognition models.
|
{
"login": "ardyli",
"id": 10685140,
"node_id": "MDQ6VXNlcjEwNjg1MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/10685140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ardyli",
"html_url": "https://github.com/ardyli",
"followers_url": "https://api.github.com/users/ardyli/followers",
"following_url": "https://api.github.com/users/ardyli/following{/other_user}",
"gists_url": "https://api.github.com/users/ardyli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ardyli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ardyli/subscriptions",
"organizations_url": "https://api.github.com/users/ardyli/orgs",
"repos_url": "https://api.github.com/users/ardyli/repos",
"events_url": "https://api.github.com/users/ardyli/events{/privacy}",
"received_events_url": "https://api.github.com/users/ardyli/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-11-04T01:26:25
| 2024-12-19T13:46:11
| 2024-11-17T14:13:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I hope ollama can provide rerank models and speech recognition models.
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7485/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7485/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/6157
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6157/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6157/comments
|
https://api.github.com/repos/ollama/ollama/issues/6157/events
|
https://github.com/ollama/ollama/issues/6157
| 2,446,898,762
|
I_kwDOJ0Z1Ps6R2LZK
| 6,157
|
always "Error: something went wrong, please see the ollama server logs for details" but no useful info in service log
|
{
"login": "EachSheep",
"id": 48718428,
"node_id": "MDQ6VXNlcjQ4NzE4NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/48718428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EachSheep",
"html_url": "https://github.com/EachSheep",
"followers_url": "https://api.github.com/users/EachSheep/followers",
"following_url": "https://api.github.com/users/EachSheep/following{/other_user}",
"gists_url": "https://api.github.com/users/EachSheep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EachSheep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EachSheep/subscriptions",
"organizations_url": "https://api.github.com/users/EachSheep/orgs",
"repos_url": "https://api.github.com/users/EachSheep/repos",
"events_url": "https://api.github.com/users/EachSheep/events{/privacy}",
"received_events_url": "https://api.github.com/users/EachSheep/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-08-04T07:31:56
| 2024-08-06T06:01:00
| 2024-08-06T06:01:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I encountered an insufficient storage space error while downloading llama3.1:70b-instruct-fp16. To resolve this, I backed up the files from `/usr/share/ollama/.ollama/models` to another drive with more space, located at `/users/shared/ollama/.ollama/models`, and configured `/etc/systemd/system/ollama.service.d/override_ollama.service` as follows:
```
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_KEEP_ALIVE=5m"
Environment="OLLAMA_MODELS=/users/shared/ollama/.ollama/models"
```
Then, I deleted the original `/usr/share/ollama/` directory, recreated the `/usr/share/ollama` directory, and changed its owner and group with `sudo chown ollama:ollama /usr/share/ollama`. After that, I reloaded the service configuration and restarted the service using `sudo systemctl daemon-reload` and `sudo systemctl restart ollama`.
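The relocation steps described above can be sketched as a short shell sequence. This is a hedged sketch, not official guidance: the paths are the ones from this report, and the `ollama` user/group and unit name assume the default Linux systemd install.
```shell
# Create the new models directory on the larger drive and give the
# ollama service user ownership (path taken from this report).
sudo mkdir -p /users/shared/ollama/.ollama/models
sudo chown -R ollama:ollama /users/shared/ollama/.ollama/models

# The ollama user's home (/usr/share/ollama by default) must still exist
# and be writable, since the server keeps its SSH key there.
sudo mkdir -p /usr/share/ollama
sudo chown ollama:ollama /usr/share/ollama

# Pick up the override file and restart the service.
sudo systemctl daemon-reload
sudo systemctl restart ollama
```
After this, `ollama list` should read models from the new `OLLAMA_MODELS` location.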
However, when running `ollama list`, I continuously received an error, as did other `ollama` commands, stating `Error: something went wrong, please see the ollama server logs for details`.
I checked the `ollama` logs using `sudo journalctl -u ollama.service > ollama_logs.txt`. The logs after restarting the service were:
```
Aug 04 07:24:50 node2 systemd[1]: Stopping Ollama Service...
Aug 04 07:24:50 node2 systemd[1]: ollama.service: Succeeded.
Aug 04 07:24:50 node2 systemd[1]: Stopped Ollama Service.
Aug 04 07:24:50 node2 systemd[1]: Started Ollama Service.
Aug 04 07:24:50 node2 ollama[78522]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Aug 04 07:24:50 node2 ollama[78522]: Your new public key is:
Aug 04 07:24:50 node2 ollama[78522]: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN8ECjLBExZOAz8FOGXuADTif9I8RIatZmmI11P2TzCh
Aug 04 07:24:50 node2 ollama[78522]: 2024/08/04 07:24:50 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS=/users/shared/ollama/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
Aug 04 07:24:50 node2 ollama[78522]: time=2024-08-04T07:24:50.954Z level=INFO source=images.go:781 msg="total blobs: 14"
Aug 04 07:24:50 node2 ollama[78522]: time=2024-08-04T07:24:50.954Z level=INFO source=images.go:788 msg="total unused blobs removed: 0"
Aug 04 07:24:50 node2 ollama[78522]: time=2024-08-04T07:24:50.955Z level=INFO source=routes.go:1155 msg="Listening on [::]:11434 (version 0.3.3)"
Aug 04 07:24:50 node2 ollama[78522]: time=2024-08-04T07:24:50.955Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama873184855/runners
```
There are no other error messages. This issue has been troubling me for half a day, and I am quite frustrated. I hope you can help me look into it.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.3
|
{
"login": "EachSheep",
"id": 48718428,
"node_id": "MDQ6VXNlcjQ4NzE4NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/48718428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EachSheep",
"html_url": "https://github.com/EachSheep",
"followers_url": "https://api.github.com/users/EachSheep/followers",
"following_url": "https://api.github.com/users/EachSheep/following{/other_user}",
"gists_url": "https://api.github.com/users/EachSheep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EachSheep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EachSheep/subscriptions",
"organizations_url": "https://api.github.com/users/EachSheep/orgs",
"repos_url": "https://api.github.com/users/EachSheep/repos",
"events_url": "https://api.github.com/users/EachSheep/events{/privacy}",
"received_events_url": "https://api.github.com/users/EachSheep/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6157/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5496
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5496/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5496/comments
|
https://api.github.com/repos/ollama/ollama/issues/5496/events
|
https://github.com/ollama/ollama/issues/5496
| 2,392,126,141
|
I_kwDOJ0Z1Ps6OlPK9
| 5,496
|
why GGML_CUDA_FORCE_MMQ is yes and CUDA_USE_TENSOR_CORES is no
|
{
"login": "rabbitpig",
"id": 2839943,
"node_id": "MDQ6VXNlcjI4Mzk5NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2839943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabbitpig",
"html_url": "https://github.com/rabbitpig",
"followers_url": "https://api.github.com/users/rabbitpig/followers",
"following_url": "https://api.github.com/users/rabbitpig/following{/other_user}",
"gists_url": "https://api.github.com/users/rabbitpig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabbitpig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabbitpig/subscriptions",
"organizations_url": "https://api.github.com/users/rabbitpig/orgs",
"repos_url": "https://api.github.com/users/rabbitpig/repos",
"events_url": "https://api.github.com/users/rabbitpig/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabbitpig/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-05T08:30:32
| 2024-10-24T02:39:37
| 2024-10-24T02:39:36
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi:
I am a newbie and am curious about how these configuration values are determined.

Can someone tell me how to change them?
Thanks.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5496/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5496/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7638
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7638/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7638/comments
|
https://api.github.com/repos/ollama/ollama/issues/7638/events
|
https://github.com/ollama/ollama/issues/7638
| 2,653,722,091
|
I_kwDOJ0Z1Ps6eLJXr
| 7,638
|
I built from source a while back and everything worked ok, the current code is dumping core, bus error.
|
{
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"repos_url": "https://api.github.com/users/phalexo/repos",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
},
{
"id": 7700262114,
"node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g",
"url": "https://api.github.com/repos/ollama/ollama/labels/build",
"name": "build",
"color": "006b75",
"default": false,
"description": "Issues relating to building ollama from source"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-11-13T00:18:17
| 2024-12-10T19:54:21
| 2024-12-10T19:54:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I used to build it with:
```
go generate ./...
go build .
```
Is it different now? Does it automatically detect CUDA at /usr/local/cuda?
### OS
Linux, Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
cloned github latest
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7638/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4233
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4233/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4233/comments
|
https://api.github.com/repos/ollama/ollama/issues/4233/events
|
https://github.com/ollama/ollama/issues/4233
| 2,283,910,906
|
I_kwDOJ0Z1Ps6IIbb6
| 4,233
|
Allow for specifying and checking the directory where models are stored
|
{
"login": "K0-RR",
"id": 43277609,
"node_id": "MDQ6VXNlcjQzMjc3NjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/43277609?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/K0-RR",
"html_url": "https://github.com/K0-RR",
"followers_url": "https://api.github.com/users/K0-RR/followers",
"following_url": "https://api.github.com/users/K0-RR/following{/other_user}",
"gists_url": "https://api.github.com/users/K0-RR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/K0-RR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/K0-RR/subscriptions",
"organizations_url": "https://api.github.com/users/K0-RR/orgs",
"repos_url": "https://api.github.com/users/K0-RR/repos",
"events_url": "https://api.github.com/users/K0-RR/events{/privacy}",
"received_events_url": "https://api.github.com/users/K0-RR/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-07T17:48:54
| 2024-05-07T18:37:55
| 2024-05-07T18:37:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How do I locate the downloaded files? This should be possible via some `ollama` command.
When pulling, we should be asked where we want the data to be saved.
Both `~/.ollama/models` and `/usr/share/ollama/` **don't exist** on my machine, and yet I'm interacting with llama3!
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4233/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2399
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2399/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2399/comments
|
https://api.github.com/repos/ollama/ollama/issues/2399/events
|
https://github.com/ollama/ollama/pull/2399
| 2,124,064,224
|
PR_kwDOJ0Z1Ps5mUTW9
| 2,399
|
Fix hanging issue when sending empty content
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-07T23:17:05
| 2024-02-08T00:30:34
| 2024-02-08T00:30:33
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2399",
"html_url": "https://github.com/ollama/ollama/pull/2399",
"diff_url": "https://github.com/ollama/ollama/pull/2399.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2399.patch",
"merged_at": "2024-02-08T00:30:33"
}
|
This fixes an issue where the prompt would be templated as an empty string `""`.
Fixes https://github.com/ollama/ollama/issues/2397
```shell
# loads model
% curl http://localhost:11434/api/chat -d '{
"model": "llama2",
"messages": [],
"stream": false
}'
{"model":"llama2","created_at":"2024-02-07T23:21:14.816749Z","message":{"role":"assistant","content":""},"done":true}
```
```shell
# loads model
% curl http://localhost:11434/api/chat -d '{
"model": "llama2",
"messages": [{ "role": "user", "content": ""}],
"stream": false
}'
{"model":"llama2","created_at":"2024-02-07T23:20:28.175454Z","message": {"role":"assistant","content":""},"done":true}
```
```shell
# still works
% curl http://localhost:11434/api/chat -d '{
"model": "llama2",
"messages": [{ "role": "system", "content": "sing me a song!"}],
"stream": false
}'
{"model":"llama2","created_at":"2024-02-07T23:19:01.582144Z","message":{"role":"assistant","content":"Of course, I'd be happy to sing you a song! *clears throat* Here we go:\n\n\"Oh, the stars are shining bright and bold,\nA celestial show, so fine and cold.\nThe moon is smiling, its light so pure,\nA gentle melody, for you I endure.\n\nIn the night's embrace, I find my peace,\nThe world outside, a distant release.\nI lose myself in the music of the night,\nAnd let my spirit take flight.\n\nSo come and join me, in this song so bright,\nTogether we'll dance, under the stars' delight.\nWith every note, our hearts will be as one,\nIn this magical moment, we'll have fun.\"\n\nHow was that? Did you enjoy it?"},"done":true,"total_duration":5454697000,"load_duration":1188958,"prompt_eval_duration":177470000,"eval_count":182,"eval_duration":5275568000}
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2399/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6270
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6270/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6270/comments
|
https://api.github.com/repos/ollama/ollama/issues/6270/events
|
https://github.com/ollama/ollama/issues/6270
| 2,456,912,746
|
I_kwDOJ0Z1Ps6ScYNq
| 6,270
|
ollama does not work continuously
|
{
"login": "peanutpaste",
"id": 62008312,
"node_id": "MDQ6VXNlcjYyMDA4MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/62008312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peanutpaste",
"html_url": "https://github.com/peanutpaste",
"followers_url": "https://api.github.com/users/peanutpaste/followers",
"following_url": "https://api.github.com/users/peanutpaste/following{/other_user}",
"gists_url": "https://api.github.com/users/peanutpaste/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peanutpaste/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peanutpaste/subscriptions",
"organizations_url": "https://api.github.com/users/peanutpaste/orgs",
"repos_url": "https://api.github.com/users/peanutpaste/repos",
"events_url": "https://api.github.com/users/peanutpaste/events{/privacy}",
"received_events_url": "https://api.github.com/users/peanutpaste/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-08-09T01:12:40
| 2024-09-05T23:01:31
| 2024-09-05T23:01:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
After I start the ollama service, it freezes after running for a while. When I access it, it keeps loading but returns no result. If I kill the process and restart it, it works for a while (or for a certain number of requests), and then freezes again. Why is this?
My device is an A6000 (48G).
```
[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="OLLAMA_MODELS=/home/q/ollama_models"
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_NUM_PARALLEL=32"
Environment="OLLAMA_MAX_QUEUE=2048"
```
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6270/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2925
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2925/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2925/comments
|
https://api.github.com/repos/ollama/ollama/issues/2925/events
|
https://github.com/ollama/ollama/pull/2925
| 2,167,724,263
|
PR_kwDOJ0Z1Ps5ooyyH
| 2,925
|
[ENH]: OpenAI embeddings endpoint
|
{
"login": "tazarov",
"id": 1157440,
"node_id": "MDQ6VXNlcjExNTc0NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1157440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tazarov",
"html_url": "https://github.com/tazarov",
"followers_url": "https://api.github.com/users/tazarov/followers",
"following_url": "https://api.github.com/users/tazarov/following{/other_user}",
"gists_url": "https://api.github.com/users/tazarov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tazarov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tazarov/subscriptions",
"organizations_url": "https://api.github.com/users/tazarov/orgs",
"repos_url": "https://api.github.com/users/tazarov/repos",
"events_url": "https://api.github.com/users/tazarov/events{/privacy}",
"received_events_url": "https://api.github.com/users/tazarov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-03-04T20:50:07
| 2024-11-21T09:24:36
| 2024-11-21T09:24:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2925",
"html_url": "https://github.com/ollama/ollama/pull/2925",
"diff_url": "https://github.com/ollama/ollama/pull/2925.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2925.patch",
"merged_at": null
}
|
Closes #2856
> Note: This is a stacked PR on top of #2924
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2925/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2925/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8468
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8468/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8468/comments
|
https://api.github.com/repos/ollama/ollama/issues/8468/events
|
https://github.com/ollama/ollama/issues/8468
| 2,794,695,100
|
I_kwDOJ0Z1Ps6mk6m8
| 8,468
|
.create() giving out ResponseError
|
{
"login": "ShivamSrng",
"id": 67229090,
"node_id": "MDQ6VXNlcjY3MjI5MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/67229090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShivamSrng",
"html_url": "https://github.com/ShivamSrng",
"followers_url": "https://api.github.com/users/ShivamSrng/followers",
"following_url": "https://api.github.com/users/ShivamSrng/following{/other_user}",
"gists_url": "https://api.github.com/users/ShivamSrng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShivamSrng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShivamSrng/subscriptions",
"organizations_url": "https://api.github.com/users/ShivamSrng/orgs",
"repos_url": "https://api.github.com/users/ShivamSrng/repos",
"events_url": "https://api.github.com/users/ShivamSrng/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShivamSrng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 7
| 2025-01-17T06:35:19
| 2025-01-18T05:01:27
| 2025-01-17T07:26:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
**OS**
Windows
**GPU**
Nvidia 3070Ti
NVIDIA-SMI 566.36
Driver Version: 566.36
CUDA Version: 12.7
**CPU**
Intel
**Ollama version**
0.5.7
**Model**
llama3.1:8b
The Modelfile I am writing is:
```
modelfile = """
FROM llama3.1
PARAMETER seed 42
PARAMETER temperature 0
SYSTEM ....the prompt .....
"""
ollama.create(
model="a_generic_model_name",
modelfile=modelfile
)
```
I am using Ollama from Python via the library mentioned in the documentation. Whenever I pass the prompt, I consistently get this error: **ResponseError: neither 'from' nor 'files' was specified**. Can I get some suggestions on how to overcome this issue?
Thanks
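For context, recent versions of the ollama Python library (0.4+) dropped the `modelfile` argument to `create()`; the request now carries fields such as `from`/`files`/`system`/`parameters`, which is what the server's error message is pointing at. A minimal sketch of migrating (the helper name is my own, not part of the library, and multi-line `SYSTEM """..."""` blocks are not handled):

```python
# Sketch: translate a legacy Modelfile string into keyword arguments for the
# newer ollama.create() signature (from_/system/parameters). Hypothetical
# helper -- not part of the ollama library itself.
def modelfile_to_kwargs(modelfile: str) -> dict:
    kwargs = {"parameters": {}}
    for line in modelfile.strip().splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        keyword, _, rest = stripped.partition(" ")
        if keyword.upper() == "FROM":
            kwargs["from_"] = rest.strip()       # 'from' is reserved in Python
        elif keyword.upper() == "PARAMETER":
            name, _, value = rest.strip().partition(" ")
            kwargs["parameters"][name] = value.strip()
        elif keyword.upper() == "SYSTEM":
            kwargs["system"] = rest.strip()
    return kwargs
```

Under that assumption, the call would become `ollama.create(model="a_generic_model_name", **modelfile_to_kwargs(modelfile))`.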
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8468/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6195
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6195/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6195/comments
|
https://api.github.com/repos/ollama/ollama/issues/6195/events
|
https://github.com/ollama/ollama/issues/6195
| 2,450,174,184
|
I_kwDOJ0Z1Ps6SCrDo
| 6,195
|
When I start the container with http_proxy and https_proxy configured, the ollama service will not start properly
|
{
"login": "0sengseng0",
"id": 73268510,
"node_id": "MDQ6VXNlcjczMjY4NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/73268510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0sengseng0",
"html_url": "https://github.com/0sengseng0",
"followers_url": "https://api.github.com/users/0sengseng0/followers",
"following_url": "https://api.github.com/users/0sengseng0/following{/other_user}",
"gists_url": "https://api.github.com/users/0sengseng0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/0sengseng0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0sengseng0/subscriptions",
"organizations_url": "https://api.github.com/users/0sengseng0/orgs",
"repos_url": "https://api.github.com/users/0sengseng0/repos",
"events_url": "https://api.github.com/users/0sengseng0/events{/privacy}",
"received_events_url": "https://api.github.com/users/0sengseng0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-08-06T07:38:45
| 2024-08-23T21:07:02
| 2024-08-23T21:07:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
## start command
docker run -d --gpus=all -v ollama:/root/.ollama -p 31434:11434 -e "OLLAMA_DEBUG=1" -e "CUDA_VISIBLE_DEVICES=0" -e "http_proxy=http://192.168.*.*:11080" -e "https_proxy=http://192.168.*.*:11080" --name ollama ollama/ollama
## operating command
[root@main ~]# docker exec -it ollama bash
root@5673350a5bbd:/# ollama list
Error: something went wrong, please see the ollama server logs for details
root@5673350a5bbd:/# ollama --version
Warning: could not connect to a running Ollama instance
Warning: client version is 0.3.3
## Container logging
[root@main git]# docker logs -f ollama
2024/08/06 07:30:27 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES:0 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-06T07:30:27.744Z level=INFO source=images.go:781 msg="total blobs: 0"
time=2024-08-06T07:30:27.744Z level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-06T07:30:27.745Z level=INFO source=routes.go:1155 msg="Listening on [::]:11434 (version 0.3.3)"
time=2024-08-06T07:30:27.746Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2311459276/runners
time=2024-08-06T07:30:27.746Z level=DEBUG source=payload.go:182 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/ollama_llama_server.gz
time=2024-08-06T07:30:27.746Z level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/ollama_llama_server.gz
time=2024-08-06T07:30:27.746Z level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/ollama_llama_server.gz
time=2024-08-06T07:30:27.746Z level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublas.so.11.gz
time=2024-08-06T07:30:27.746Z level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublasLt.so.11.gz
time=2024-08-06T07:30:27.746Z level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcudart.so.11.0.gz
time=2024-08-06T07:30:27.746Z level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/ollama_llama_server.gz
time=2024-08-06T07:30:27.746Z level=DEBUG source=payload.go:182 msg=extracting variant=rocm_v60102 file=build/linux/x86_64/rocm_v60102/bin/deps.txt.gz
time=2024-08-06T07:30:27.746Z level=DEBUG source=payload.go:182 msg=extracting variant=rocm_v60102 file=build/linux/x86_64/rocm_v60102/bin/ollama_llama_server.gz
time=2024-08-06T07:30:35.271Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2311459276/runners/cpu/ollama_llama_server
time=2024-08-06T07:30:35.271Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2311459276/runners/cpu_avx/ollama_llama_server
time=2024-08-06T07:30:35.271Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2311459276/runners/cpu_avx2/ollama_llama_server
time=2024-08-06T07:30:35.271Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2311459276/runners/cuda_v11/ollama_llama_server
time=2024-08-06T07:30:35.271Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2311459276/runners/rocm_v60102/ollama_llama_server
time=2024-08-06T07:30:35.271Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60102]"
time=2024-08-06T07:30:35.271Z level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-08-06T07:30:35.271Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-08-06T07:30:35.271Z level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-08-06T07:30:35.272Z level=DEBUG source=gpu.go:91 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-08-06T07:30:35.272Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcuda.so*
time=2024-08-06T07:30:35.272Z level=DEBUG source=gpu.go:487 msg="gpu library search" globs="[/usr/local/nvidia/lib/libcuda.so** /usr/local/nvidia/lib64/libcuda.so** /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-08-06T07:30:35.273Z level=DEBUG source=gpu.go:521 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.535.154.05]
CUDA driver version: 12.2
time=2024-08-06T07:30:35.287Z level=DEBUG source=gpu.go:124 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.535.154.05
[GPU-dfcbde6c-a5c4-f8b6-e56d-5b0d63559daf] CUDA totalMem 24217 mb
[GPU-dfcbde6c-a5c4-f8b6-e56d-5b0d63559daf] CUDA freeMem 5622 mb
[GPU-dfcbde6c-a5c4-f8b6-e56d-5b0d63559daf] Compute Capability 8.9
time=2024-08-06T07:30:35.467Z level=DEBUG source=amd_linux.go:371 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2024-08-06T07:30:35.467Z level=INFO source=types.go:105 msg="inference compute" id=GPU-dfcbde6c-a5c4-f8b6-e56d-5b0d63559daf library=cuda compute=8.9 driver=12.2 name="NVIDIA GeForce RTX 4090" total="23.6 GiB" available="5.5 GiB"
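A likely culprit (unverified for this exact setup) is that Go's HTTP client honors `http_proxy`/`https_proxy`, so the `ollama` CLI inside the container tries to reach its own server at `127.0.0.1:11434` through the proxy. A sketch of a workaround, substituting a hypothetical proxy address for the redacted `192.168.*.*` one, is to also exempt loopback addresses via `no_proxy`/`NO_PROXY` (the `echo` just prints the command instead of running it):

```shell
# Hypothetical proxy address; the point is the no_proxy exemptions,
# which keep loopback traffic (CLI -> local server on 11434) off the proxy.
NO_PROXY_LIST="localhost,127.0.0.1,0.0.0.0,::1"
echo docker run -d --gpus=all -v ollama:/root/.ollama -p 31434:11434 \
  -e "OLLAMA_DEBUG=1" -e "CUDA_VISIBLE_DEVICES=0" \
  -e "http_proxy=http://proxy.example:11080" \
  -e "https_proxy=http://proxy.example:11080" \
  -e "no_proxy=${NO_PROXY_LIST}" -e "NO_PROXY=${NO_PROXY_LIST}" \
  --name ollama ollama/ollama
```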
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.3
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6195/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6087
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6087/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6087/comments
|
https://api.github.com/repos/ollama/ollama/issues/6087/events
|
https://github.com/ollama/ollama/issues/6087
| 2,439,021,540
|
I_kwDOJ0Z1Ps6RYIPk
| 6,087
|
hope to add "Telechat"
|
{
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enryteam/followers",
"following_url": "https://api.github.com/users/enryteam/following{/other_user}",
"gists_url": "https://api.github.com/users/enryteam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enryteam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enryteam/subscriptions",
"organizations_url": "https://api.github.com/users/enryteam/orgs",
"repos_url": "https://api.github.com/users/enryteam/repos",
"events_url": "https://api.github.com/users/enryteam/events{/privacy}",
"received_events_url": "https://api.github.com/users/enryteam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-07-31T03:15:51
| 2024-10-21T04:28:13
| 2024-10-21T04:28:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
https://github.com/Tele-AI/Telechat
|
{
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enryteam/followers",
"following_url": "https://api.github.com/users/enryteam/following{/other_user}",
"gists_url": "https://api.github.com/users/enryteam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enryteam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enryteam/subscriptions",
"organizations_url": "https://api.github.com/users/enryteam/orgs",
"repos_url": "https://api.github.com/users/enryteam/repos",
"events_url": "https://api.github.com/users/enryteam/events{/privacy}",
"received_events_url": "https://api.github.com/users/enryteam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6087/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1844
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1844/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1844/comments
|
https://api.github.com/repos/ollama/ollama/issues/1844/events
|
https://github.com/ollama/ollama/pull/1844
| 2,069,230,947
|
PR_kwDOJ0Z1Ps5jaQfi
| 1,844
|
Workaround memory memory limitations
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-07T18:12:47
| 2024-01-08T21:03:26
| 2024-01-08T21:03:25
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1844",
"html_url": "https://github.com/ollama/ollama/pull/1844",
"diff_url": "https://github.com/ollama/ollama/pull/1844.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1844.patch",
"merged_at": null
}
|
This isn't a proper fix, but until we more completely calculate memory requirements, this seems to avoid crashes when approaching the limit on smaller memory CUDA GPUs.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1844/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3040
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3040/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3040/comments
|
https://api.github.com/repos/ollama/ollama/issues/3040/events
|
https://github.com/ollama/ollama/issues/3040
| 2,177,750,063
|
I_kwDOJ0Z1Ps6BzdQv
| 3,040
|
cant Pull a model from Ollama.com
|
{
"login": "Than281",
"id": 66867089,
"node_id": "MDQ6VXNlcjY2ODY3MDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/66867089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Than281",
"html_url": "https://github.com/Than281",
"followers_url": "https://api.github.com/users/Than281/followers",
"following_url": "https://api.github.com/users/Than281/following{/other_user}",
"gists_url": "https://api.github.com/users/Than281/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Than281/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Than281/subscriptions",
"organizations_url": "https://api.github.com/users/Than281/orgs",
"repos_url": "https://api.github.com/users/Than281/repos",
"events_url": "https://api.github.com/users/Than281/events{/privacy}",
"received_events_url": "https://api.github.com/users/Than281/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-03-10T14:38:12
| 2024-03-10T16:03:44
| 2024-03-10T16:03:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When I run `ollama run llama2-uncensored` or any other model, I always get `pull model manifest: file does not exist`
|
{
"login": "Than281",
"id": 66867089,
"node_id": "MDQ6VXNlcjY2ODY3MDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/66867089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Than281",
"html_url": "https://github.com/Than281",
"followers_url": "https://api.github.com/users/Than281/followers",
"following_url": "https://api.github.com/users/Than281/following{/other_user}",
"gists_url": "https://api.github.com/users/Than281/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Than281/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Than281/subscriptions",
"organizations_url": "https://api.github.com/users/Than281/orgs",
"repos_url": "https://api.github.com/users/Than281/repos",
"events_url": "https://api.github.com/users/Than281/events{/privacy}",
"received_events_url": "https://api.github.com/users/Than281/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3040/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2969
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2969/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2969/comments
|
https://api.github.com/repos/ollama/ollama/issues/2969/events
|
https://github.com/ollama/ollama/pull/2969
| 2,172,810,064
|
PR_kwDOJ0Z1Ps5o6Ocn
| 2,969
|
Hijack the build for msys2 :P
|
{
"login": "Kreijstal",
"id": 2415206,
"node_id": "MDQ6VXNlcjI0MTUyMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2415206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kreijstal",
"html_url": "https://github.com/Kreijstal",
"followers_url": "https://api.github.com/users/Kreijstal/followers",
"following_url": "https://api.github.com/users/Kreijstal/following{/other_user}",
"gists_url": "https://api.github.com/users/Kreijstal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kreijstal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kreijstal/subscriptions",
"organizations_url": "https://api.github.com/users/Kreijstal/orgs",
"repos_url": "https://api.github.com/users/Kreijstal/repos",
"events_url": "https://api.github.com/users/Kreijstal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kreijstal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-07T02:25:53
| 2024-11-21T08:17:51
| 2024-11-21T08:17:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2969",
"html_url": "https://github.com/ollama/ollama/pull/2969",
"diff_url": "https://github.com/ollama/ollama/pull/2969.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2969.patch",
"merged_at": null
}
|
This makes it so that you can build with MinGW/GCC on Windows rather than with MSVC.
fixes #2887
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2969/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4910
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4910/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4910/comments
|
https://api.github.com/repos/ollama/ollama/issues/4910/events
|
https://github.com/ollama/ollama/pull/4910
| 2,340,773,570
|
PR_kwDOJ0Z1Ps5x0N5M
| 4,910
|
fix create model when template detection errors
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-07T16:06:19
| 2024-06-07T18:07:40
| 2024-06-07T18:07:39
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4910",
"html_url": "https://github.com/ollama/ollama/pull/4910",
"diff_url": "https://github.com/ollama/ollama/pull/4910.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4910.patch",
"merged_at": "2024-06-07T18:07:39"
}
|
This change fixes a bug where a template detection error caused `create` to fail instead of simply omitting the template
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4910/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1840
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1840/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1840/comments
|
https://api.github.com/repos/ollama/ollama/issues/1840/events
|
https://github.com/ollama/ollama/pull/1840
| 2,069,129,706
|
PR_kwDOJ0Z1Ps5jZ8oq
| 1,840
|
docker-compose added
|
{
"login": "jinnabaalu",
"id": 11784253,
"node_id": "MDQ6VXNlcjExNzg0MjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/11784253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinnabaalu",
"html_url": "https://github.com/jinnabaalu",
"followers_url": "https://api.github.com/users/jinnabaalu/followers",
"following_url": "https://api.github.com/users/jinnabaalu/following{/other_user}",
"gists_url": "https://api.github.com/users/jinnabaalu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jinnabaalu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jinnabaalu/subscriptions",
"organizations_url": "https://api.github.com/users/jinnabaalu/orgs",
"repos_url": "https://api.github.com/users/jinnabaalu/repos",
"events_url": "https://api.github.com/users/jinnabaalu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jinnabaalu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-07T13:25:28
| 2024-09-05T20:23:17
| 2024-09-05T20:23:17
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1840",
"html_url": "https://github.com/ollama/ollama/pull/1840",
"diff_url": "https://github.com/ollama/ollama/pull/1840.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1840.patch",
"merged_at": null
}
|
Created a docker-compose file for running Ollama as a container
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1840/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6679
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6679/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6679/comments
|
https://api.github.com/repos/ollama/ollama/issues/6679/events
|
https://github.com/ollama/ollama/issues/6679
| 2,510,663,593
|
I_kwDOJ0Z1Ps6Vpa-p
| 6,679
|
HTTP_PROXY Not Being Used in Model Requests
|
{
"login": "cmilhaupt",
"id": 15024866,
"node_id": "MDQ6VXNlcjE1MDI0ODY2",
"avatar_url": "https://avatars.githubusercontent.com/u/15024866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmilhaupt",
"html_url": "https://github.com/cmilhaupt",
"followers_url": "https://api.github.com/users/cmilhaupt/followers",
"following_url": "https://api.github.com/users/cmilhaupt/following{/other_user}",
"gists_url": "https://api.github.com/users/cmilhaupt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmilhaupt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmilhaupt/subscriptions",
"organizations_url": "https://api.github.com/users/cmilhaupt/orgs",
"repos_url": "https://api.github.com/users/cmilhaupt/repos",
"events_url": "https://api.github.com/users/cmilhaupt/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmilhaupt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 30
| 2024-09-06T15:15:22
| 2024-09-18T19:53:56
| 2024-09-18T19:53:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The `HTTP_PROXY` and `HTTPS_PROXY` variables aren't being used when requesting the model manifest file or pulling the model itself. The symptom that led me to the extra debugging started when trying to pull a model and getting the following error:
```
$ ollama pull gemma2:27b
pulling manifest
Error: pull model manifest: invalid character '<' looking for beginning of value
```
This error occurred for any model I tried to pull. Cloning the software and building according to the developer instructions, I was able to print out the full body of the response, which revealed an HTML page (starting with `<html>`) being returned by my corporate proxy. I have both `HTTP_PROXY` and `HTTPS_PROXY` set in /etc/environments, and other Go programs are able to see them. What ended up working for me was hard-coding the proxy directly into ollama:
```
diff --git a/server/download.go b/server/download.go
index 02f7ae88..71ae8329 100644
--- a/server/download.go
+++ b/server/download.go
@@ -330,7 +330,12 @@ func (b *blobDownload) downloadChunk(ctx context.Context, requestURL *url.URL, w
return err
}
req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", part.StartsAt(), part.StopsAt()-1))
- resp, err := http.DefaultClient.Do(req)
+
+ u, err := url.Parse("http://myproxy:port")
+ http_client := &http.Client{
+ Transport: &http.Transport{Proxy: http.ProxyURL(u)},
+ }
+ resp, err := http_client.Do(req)
if err != nil {
return err
}
diff --git a/server/images.go b/server/images.go
index b5bf7ad6..bdbd01fd 100644
--- a/server/images.go
+++ b/server/images.go
@@ -1098,9 +1098,14 @@ func makeRequest(ctx context.Context, method string, requestURL *url.URL, header
req.ContentLength = contentLength
}
- resp, err := (&http.Client{
+ u, err := url.Parse("http://myproxy:port")
+ http_client := &http.Client{
+ Transport: &http.Transport{Proxy: http.ProxyURL(u)},
CheckRedirect: regOpts.CheckRedirect,
- }).Do(req)
+ }
+
+ resp, err := http_client.Do(req)
+
if err != nil {
return nil, err
}
```
I'm still new-ish to Go, so I'm not sure what the proper fix is here, only that it isn't what I've provided. If it helps, I'm using Go v1.23.0. Some light googling suggested that it's possible to instantiate a default http.Transport struct which should pick up the proxy information, so I'm sure there are more generic ways to apply this patch. I'm happy to open the MR with a little guidance on what the generic solution would be.
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
v0.3.9
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6679/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/405
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/405/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/405/comments
|
https://api.github.com/repos/ollama/ollama/issues/405/events
|
https://github.com/ollama/ollama/pull/405
| 1,865,604,112
|
PR_kwDOJ0Z1Ps5Yt9Xm
| 405
|
add 34b model type
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-24T17:36:03
| 2023-08-24T17:37:23
| 2023-08-24T17:37:22
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/405",
"html_url": "https://github.com/ollama/ollama/pull/405",
"diff_url": "https://github.com/ollama/ollama/pull/405.diff",
"patch_url": "https://github.com/ollama/ollama/pull/405.patch",
"merged_at": "2023-08-24T17:37:22"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/405/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8487
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8487/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8487/comments
|
https://api.github.com/repos/ollama/ollama/issues/8487/events
|
https://github.com/ollama/ollama/pull/8487
| 2,797,713,644
|
PR_kwDOJ0Z1Ps6IRdY9
| 8,487
|
Update gpu.md
|
{
"login": "EndoTheDev",
"id": 186935988,
"node_id": "U_kgDOCyRqtA",
"avatar_url": "https://avatars.githubusercontent.com/u/186935988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EndoTheDev",
"html_url": "https://github.com/EndoTheDev",
"followers_url": "https://api.github.com/users/EndoTheDev/followers",
"following_url": "https://api.github.com/users/EndoTheDev/following{/other_user}",
"gists_url": "https://api.github.com/users/EndoTheDev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EndoTheDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EndoTheDev/subscriptions",
"organizations_url": "https://api.github.com/users/EndoTheDev/orgs",
"repos_url": "https://api.github.com/users/EndoTheDev/repos",
"events_url": "https://api.github.com/users/EndoTheDev/events{/privacy}",
"received_events_url": "https://api.github.com/users/EndoTheDev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-19T16:33:00
| 2025-01-20T03:15:14
| 2025-01-20T02:45:35
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8487",
"html_url": "https://github.com/ollama/ollama/pull/8487",
"diff_url": "https://github.com/ollama/ollama/pull/8487.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8487.patch",
"merged_at": "2025-01-20T02:45:35"
}
|
changed Laptop to Linux since it does not only affect laptop users but Linux users in general
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8487/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1837
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1837/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1837/comments
|
https://api.github.com/repos/ollama/ollama/issues/1837/events
|
https://github.com/ollama/ollama/issues/1837
| 2,069,046,483
|
I_kwDOJ0Z1Ps57UyTT
| 1,837
|
Ollama crashes quite often for Fedora 39 with NVIDIA T1200 Laptop GPU
|
{
"login": "ilovepumpkin",
"id": 11310645,
"node_id": "MDQ6VXNlcjExMzEwNjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/11310645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ilovepumpkin",
"html_url": "https://github.com/ilovepumpkin",
"followers_url": "https://api.github.com/users/ilovepumpkin/followers",
"following_url": "https://api.github.com/users/ilovepumpkin/following{/other_user}",
"gists_url": "https://api.github.com/users/ilovepumpkin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ilovepumpkin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ilovepumpkin/subscriptions",
"organizations_url": "https://api.github.com/users/ilovepumpkin/orgs",
"repos_url": "https://api.github.com/users/ilovepumpkin/repos",
"events_url": "https://api.github.com/users/ilovepumpkin/events{/privacy}",
"received_events_url": "https://api.github.com/users/ilovepumpkin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-01-07T08:53:07
| 2024-01-08T21:42:04
| 2024-01-08T21:42:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
When I use ollama with an NVIDIA T1200 Laptop GPU on Fedora 39, it crashes quite often regardless of which model I am running. Is there any way to troubleshoot this issue?
Here is the output of `nvidia-smi`
```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.29.06 Driver Version: 545.29.06 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA T1200 Laptop GPU Off | 00000000:01:00.0 On | N/A |
| N/A 44C P8 6W / 60W | 303MiB / 4096MiB | 7% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 3280 G /usr/libexec/Xorg 115MiB |
| 0 N/A N/A 4776 C+G ...seed-version=20240105-201042.648000 177MiB |
+---------------------------------------------------------------------------------------+
```
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1837/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1831
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1831/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1831/comments
|
https://api.github.com/repos/ollama/ollama/issues/1831/events
|
https://github.com/ollama/ollama/issues/1831
| 2,068,888,882
|
I_kwDOJ0Z1Ps57UL0y
| 1,831
|
Make a second docker image for "NVidia GPUs"
|
{
"login": "frankhaugen",
"id": 13543585,
"node_id": "MDQ6VXNlcjEzNTQzNTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/13543585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankhaugen",
"html_url": "https://github.com/frankhaugen",
"followers_url": "https://api.github.com/users/frankhaugen/followers",
"following_url": "https://api.github.com/users/frankhaugen/following{/other_user}",
"gists_url": "https://api.github.com/users/frankhaugen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankhaugen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankhaugen/subscriptions",
"organizations_url": "https://api.github.com/users/frankhaugen/orgs",
"repos_url": "https://api.github.com/users/frankhaugen/repos",
"events_url": "https://api.github.com/users/frankhaugen/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankhaugen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2024-01-06T23:30:48
| 2024-01-09T01:07:35
| 2024-01-09T01:07:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I wanted to set up Ollama (great project, the CPU variant ran out of the box and I had it doing its thing, deserves a 🥇). HOWEVER, I have spent 7 hours flailing with Docker (I am on a Windows 11 machine with a 4070 TI card) to get this to work.
I even had ChatGPT, Bing Copilot, Rider AI Assistant and a Linux guru friend help me, and no joy; I just can't get Docker with the NVIDIA toolkit and Ollama to run together in a container.
PLEASE make a "ready to run" Docker image that is already 100% ready to go for "NVIDIA GPU mode". I am probably missing something — maybe deprecated dependencies, maybe something else — but the simple solution here is to have multiple Docker images with dedicated "optimizations".
Thank you for an awesome platform for hosting local AI models 😸
Edit: To be clear, the WSL installation variant worked great
|
{
"login": "frankhaugen",
"id": 13543585,
"node_id": "MDQ6VXNlcjEzNTQzNTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/13543585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankhaugen",
"html_url": "https://github.com/frankhaugen",
"followers_url": "https://api.github.com/users/frankhaugen/followers",
"following_url": "https://api.github.com/users/frankhaugen/following{/other_user}",
"gists_url": "https://api.github.com/users/frankhaugen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankhaugen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankhaugen/subscriptions",
"organizations_url": "https://api.github.com/users/frankhaugen/orgs",
"repos_url": "https://api.github.com/users/frankhaugen/repos",
"events_url": "https://api.github.com/users/frankhaugen/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankhaugen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1831/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1187
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1187/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1187/comments
|
https://api.github.com/repos/ollama/ollama/issues/1187/events
|
https://github.com/ollama/ollama/issues/1187
| 2,000,135,117
|
I_kwDOJ0Z1Ps53N6PN
| 1,187
|
Scaling/Concurrent Requests
|
{
"login": "jjsarf",
"id": 34278274,
"node_id": "MDQ6VXNlcjM0Mjc4Mjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/34278274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jjsarf",
"html_url": "https://github.com/jjsarf",
"followers_url": "https://api.github.com/users/jjsarf/followers",
"following_url": "https://api.github.com/users/jjsarf/following{/other_user}",
"gists_url": "https://api.github.com/users/jjsarf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jjsarf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jjsarf/subscriptions",
"organizations_url": "https://api.github.com/users/jjsarf/orgs",
"repos_url": "https://api.github.com/users/jjsarf/repos",
"events_url": "https://api.github.com/users/jjsarf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jjsarf/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-11-18T01:07:51
| 2024-02-20T01:12:13
| 2024-02-20T01:12:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello again. Great project. This may not be an issue, but I did notice that placing a second request while another one is currently processing makes the new request time out.
Is this by design? This is not the case when using HuggingFace UI >0.4
Thanks.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1187/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2577
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2577/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2577/comments
|
https://api.github.com/repos/ollama/ollama/issues/2577/events
|
https://github.com/ollama/ollama/issues/2577
| 2,141,052,285
|
I_kwDOJ0Z1Ps5_nd19
| 2,577
|
JSON mode should disallow trailing whitespace
|
{
"login": "joliss",
"id": 524783,
"node_id": "MDQ6VXNlcjUyNDc4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/524783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joliss",
"html_url": "https://github.com/joliss",
"followers_url": "https://api.github.com/users/joliss/followers",
"following_url": "https://api.github.com/users/joliss/following{/other_user}",
"gists_url": "https://api.github.com/users/joliss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joliss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joliss/subscriptions",
"organizations_url": "https://api.github.com/users/joliss/orgs",
"repos_url": "https://api.github.com/users/joliss/repos",
"events_url": "https://api.github.com/users/joliss/events{/privacy}",
"received_events_url": "https://api.github.com/users/joliss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-02-18T15:50:38
| 2024-12-05T00:48:47
| 2024-12-05T00:48:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When using `--format json`, small models sometimes try to add more text after the object, resulting in infinite whitespace.
```
$ ollama run qwen:0.5b-chat-v1.5-q4_K_M 'Reply with only an example JSON object' --format json
{
"name": "John",
"age": 30,
"city": "New York"
}
[whitespace continues indefinitely]
```
I think this could be easily fixed by disallowing trailing whitespace in JSON mode; that is, by forcing the assistant message to terminate immediately after the closing `}` of the top-level JSON object.
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2577/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2577/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7235
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7235/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7235/comments
|
https://api.github.com/repos/ollama/ollama/issues/7235/events
|
https://github.com/ollama/ollama/issues/7235
| 2,593,792,670
|
I_kwDOJ0Z1Ps6amiKe
| 7,235
|
OpenAI API Compatible
|
{
"login": "tobegit3hub",
"id": 2715000,
"node_id": "MDQ6VXNlcjI3MTUwMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2715000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tobegit3hub",
"html_url": "https://github.com/tobegit3hub",
"followers_url": "https://api.github.com/users/tobegit3hub/followers",
"following_url": "https://api.github.com/users/tobegit3hub/following{/other_user}",
"gists_url": "https://api.github.com/users/tobegit3hub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tobegit3hub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tobegit3hub/subscriptions",
"organizations_url": "https://api.github.com/users/tobegit3hub/orgs",
"repos_url": "https://api.github.com/users/tobegit3hub/repos",
"events_url": "https://api.github.com/users/tobegit3hub/events{/privacy}",
"received_events_url": "https://api.github.com/users/tobegit3hub/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-10-17T06:49:51
| 2024-11-06T11:14:47
| 2024-11-06T11:14:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7235/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2551
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2551/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2551/comments
|
https://api.github.com/repos/ollama/ollama/issues/2551/events
|
https://github.com/ollama/ollama/issues/2551
| 2,139,329,765
|
I_kwDOJ0Z1Ps5_g5Tl
| 2,551
|
Can we change where the models are stored in windows
|
{
"login": "Gyro0o",
"id": 117460296,
"node_id": "U_kgDOBwBNSA",
"avatar_url": "https://avatars.githubusercontent.com/u/117460296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gyro0o",
"html_url": "https://github.com/Gyro0o",
"followers_url": "https://api.github.com/users/Gyro0o/followers",
"following_url": "https://api.github.com/users/Gyro0o/following{/other_user}",
"gists_url": "https://api.github.com/users/Gyro0o/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gyro0o/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gyro0o/subscriptions",
"organizations_url": "https://api.github.com/users/Gyro0o/orgs",
"repos_url": "https://api.github.com/users/Gyro0o/repos",
"events_url": "https://api.github.com/users/Gyro0o/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gyro0o/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 30
| 2024-02-16T19:52:24
| 2025-01-27T15:20:38
| 2024-02-17T00:15:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
As far as I know the models are automatically downloaded to C:/Users/username/.ollama
But can we change the directory to another one due to storage issues?
|
{
"login": "Gyro0o",
"id": 117460296,
"node_id": "U_kgDOBwBNSA",
"avatar_url": "https://avatars.githubusercontent.com/u/117460296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gyro0o",
"html_url": "https://github.com/Gyro0o",
"followers_url": "https://api.github.com/users/Gyro0o/followers",
"following_url": "https://api.github.com/users/Gyro0o/following{/other_user}",
"gists_url": "https://api.github.com/users/Gyro0o/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gyro0o/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gyro0o/subscriptions",
"organizations_url": "https://api.github.com/users/Gyro0o/orgs",
"repos_url": "https://api.github.com/users/Gyro0o/repos",
"events_url": "https://api.github.com/users/Gyro0o/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gyro0o/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2551/reactions",
"total_count": 32,
"+1": 26,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 6,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2551/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1616
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1616/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1616/comments
|
https://api.github.com/repos/ollama/ollama/issues/1616/events
|
https://github.com/ollama/ollama/issues/1616
| 2,049,473,064
|
I_kwDOJ0Z1Ps56KHoo
| 1,616
|
How to skip animation?
|
{
"login": "kokizzu",
"id": 1061610,
"node_id": "MDQ6VXNlcjEwNjE2MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1061610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kokizzu",
"html_url": "https://github.com/kokizzu",
"followers_url": "https://api.github.com/users/kokizzu/followers",
"following_url": "https://api.github.com/users/kokizzu/following{/other_user}",
"gists_url": "https://api.github.com/users/kokizzu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kokizzu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kokizzu/subscriptions",
"organizations_url": "https://api.github.com/users/kokizzu/orgs",
"repos_url": "https://api.github.com/users/kokizzu/repos",
"events_url": "https://api.github.com/users/kokizzu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kokizzu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-12-19T21:37:09
| 2023-12-20T23:48:45
| 2023-12-20T23:48:45
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
for example when i run:
```
ollama run mistral
>>> some prompt
... very slow letter by letter output ... <-- how to make this faster?
```
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1616/timeline
| null |
completed
| false
|