Dataset schema (column: dtype, observed range):
- `url`: string, length 51–54
- `repository_url`: string, 1 class
- `labels_url`: string, length 65–68
- `comments_url`: string, length 60–63
- `events_url`: string, length 58–61
- `html_url`: string, length 39–44
- `id`: int64, 1.78B–2.82B
- `node_id`: string, length 18–19
- `number`: int64, 1–8.69k
- `title`: string, length 1–382
- `user`: dict
- `labels`: list, length 0–5
- `state`: string, 2 classes
- `locked`: bool, 1 class
- `assignee`: dict
- `assignees`: list, length 0–2
- `milestone`: null
- `comments`: int64, 0–323
- `created_at`: timestamp[s]
- `updated_at`: timestamp[s]
- `closed_at`: timestamp[s]
- `author_association`: string, 4 classes
- `sub_issues_summary`: dict
- `active_lock_reason`: null
- `draft`: bool, 2 classes
- `pull_request`: dict
- `body`: string, length 2–118k, nullable (⌀)
- `closed_by`: dict
- `reactions`: dict
- `timeline_url`: string, length 60–63
- `performed_via_github_app`: null
- `state_reason`: string, 4 classes
- `is_pull_request`: bool, 2 classes
https://api.github.com/repos/ollama/ollama/issues/2314
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2314/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2314/comments
|
https://api.github.com/repos/ollama/ollama/issues/2314/events
|
https://github.com/ollama/ollama/issues/2314
| 2,113,544,278
|
I_kwDOJ0Z1Ps59-iBW
| 2,314
|
add official Java SDK
|
{
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/followers",
"following_url": "https://api.github.com/users/adriens/following{/other_user}",
"gists_url": "https://api.github.com/users/adriens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adriens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adriens/subscriptions",
"organizations_url": "https://api.github.com/users/adriens/orgs",
"repos_url": "https://api.github.com/users/adriens/repos",
"events_url": "https://api.github.com/users/adriens/events{/privacy}",
"received_events_url": "https://api.github.com/users/adriens/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-02-01T21:47:04
| 2024-03-11T23:28:31
| 2024-03-11T23:28:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# :information_source: Context
A few months ago, `langchain` was ported to Java as [`langchain4j`](https://github.com/langchain4j/langchain4j). Doing so opened up a lot of enterprise-ready tooling and innovation on the Java stack, then on the native one, and then on k8s, even for Java developers.
For example, my team uses [`quarkus`](https://endoflife.date/quarkus-framework), and I'm fairly sure an `ollama` Java SDK would help people build amazing things on top of `ollama`.

Some examples below:
- [:memo: Quarkus Langchain4j extension in Quarkiverse](https://docs.quarkiverse.io/quarkus-langchain4j/dev/index.html)
- [:cinema: Fireside Chat: Langchain4j & Quarkus](https://www.youtube.com/live/mYw9ySwmK34?si=dRe54Dc6ZR316RoA)
- [:octocat: Quarkus Langchain4j extension](https://github.com/quarkiverse/quarkus-langchain4j)
- [:bird: Worthwhile tweet](https://twitter.com/sebi2706/status/1753037267063513555)

# :dart: Feature request: `ollama-java`
- [ ] Port the `ollama` SDK to Java
- [ ] Deliver it as a Quarkus extension on [Quarkiverse](https://hub.quarkiverse.io/)
# :tickets: Related issues
- https://github.com/ollama/ollama/issues/1322
# :moneybag: Benefits
- Welcome `ollama` to the [GraalVM](https://www.graalvm.org/) ecosystem
- Implement [SmallRye Health](https://quarkus.io/guides/smallrye-health)
- Build apps/stacks around `ollama` (e.g. implement #2301 by queuing tasks)
- Open `ollama` to Java people (& enterprise stacks)
- Blazing-fast performance (see [📑 Quarkus : the fast, eco & DevEx-friendly framework](https://dev.to/adriens/quarkus-the-fast-eco-devex-friendly-framework-i0k) for more)
- Build faster thanks to the Java stack (examples below):
  - [Quarkus Extension for Apache Kafka](https://quarkus.io/guides/kafka)
  - [RabbitMQ Client](https://quarkus.io/extensions/io.quarkiverse.rabbitmqclient/quarkus-rabbitmq-client/)
  - [Neo4j client](https://quarkus.io/extensions/io.quarkiverse.neo4j/quarkus-neo4j/)
  - [Apache Kafka Client](https://quarkus.io/extensions/io.quarkus/quarkus-kafka-client/)
  - [gRPC](https://quarkus.io/extensions/io.quarkus/quarkus-grpc/)
  - [MongoDB client](https://quarkus.io/extensions/io.quarkus/quarkus-mongodb-client/)
  - [Redis Client](https://quarkus.io/extensions/io.quarkus/quarkus-redis-client/)
  - [Apache Camel](https://quarkus.io/guides/camel)
|
{
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/followers",
"following_url": "https://api.github.com/users/adriens/following{/other_user}",
"gists_url": "https://api.github.com/users/adriens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adriens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adriens/subscriptions",
"organizations_url": "https://api.github.com/users/adriens/orgs",
"repos_url": "https://api.github.com/users/adriens/repos",
"events_url": "https://api.github.com/users/adriens/events{/privacy}",
"received_events_url": "https://api.github.com/users/adriens/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2314/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7354
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7354/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7354/comments
|
https://api.github.com/repos/ollama/ollama/issues/7354/events
|
https://github.com/ollama/ollama/issues/7354
| 2,613,657,050
|
I_kwDOJ0Z1Ps6byT3a
| 7,354
|
CMD ["serve"] is wrong in Dockerfile?
|
{
"login": "YinYongHongYork",
"id": 114273033,
"node_id": "U_kgDOBs-rCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/114273033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YinYongHongYork",
"html_url": "https://github.com/YinYongHongYork",
"followers_url": "https://api.github.com/users/YinYongHongYork/followers",
"following_url": "https://api.github.com/users/YinYongHongYork/following{/other_user}",
"gists_url": "https://api.github.com/users/YinYongHongYork/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YinYongHongYork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YinYongHongYork/subscriptions",
"organizations_url": "https://api.github.com/users/YinYongHongYork/orgs",
"repos_url": "https://api.github.com/users/YinYongHongYork/repos",
"events_url": "https://api.github.com/users/YinYongHongYork/events{/privacy}",
"received_events_url": "https://api.github.com/users/YinYongHongYork/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-10-25T10:01:50
| 2024-10-29T17:41:47
| 2024-10-29T17:41:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
CMD ["serve"]
### OS
Docker
### GPU
Other
### CPU
_No response_
### Ollama version
_No response_
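For context on why this is probably not a bug: in Docker, `CMD` supplies default arguments to `ENTRYPOINT` rather than being the whole command line. A sketch of the usual pattern (an assumption about how the image is built; verify against the repository's actual Dockerfile):

```dockerfile
ENTRYPOINT ["/bin/ollama"]   # the container always runs the ollama binary
CMD ["serve"]                # default argument, so the container runs "ollama serve"
# "docker run <image> run llama2" would replace CMD and run "ollama run llama2"
```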
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7354/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2707
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2707/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2707/comments
|
https://api.github.com/repos/ollama/ollama/issues/2707/events
|
https://github.com/ollama/ollama/issues/2707
| 2,150,788,939
|
I_kwDOJ0Z1Ps6AMm9L
| 2,707
|
Ollama hangs when used with flowise
|
{
"login": "wurstache",
"id": 3034620,
"node_id": "MDQ6VXNlcjMwMzQ2MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3034620?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wurstache",
"html_url": "https://github.com/wurstache",
"followers_url": "https://api.github.com/users/wurstache/followers",
"following_url": "https://api.github.com/users/wurstache/following{/other_user}",
"gists_url": "https://api.github.com/users/wurstache/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wurstache/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wurstache/subscriptions",
"organizations_url": "https://api.github.com/users/wurstache/orgs",
"repos_url": "https://api.github.com/users/wurstache/repos",
"events_url": "https://api.github.com/users/wurstache/events{/privacy}",
"received_events_url": "https://api.github.com/users/wurstache/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 8
| 2024-02-23T10:22:03
| 2024-07-23T23:34:33
| 2024-03-13T03:39:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
I'm testing embeddings with Ollama and Flowise. Everything works fine with OpenAI, and it sometimes works with Ollama. Runs take a long time with Ollama, so I used a 1-page PDF and went to bed.
This morning I got the embedding, but Ollama is still running flat out, as if it were replaying the last API request over and over.
I don't understand the code; I'm trying, but I'm stuck.
Does anyone have the same issue? Can anyone take a look?

`$ top`:
```
top - 11:16:36 up 14:45, 2 users, load average: 3.39, 3.37, 3.43
Tasks: 117 total, 1 running, 116 sleeping, 0 stopped, 0 zombie
%Cpu(s): 99.0 us, 0.8 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.2 st
MiB Mem : 5941.2 total, 123.3 free, 5293.6 used, 524.4 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 288.9 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
33699 ollama 20 0 6998088 4.4g 2936 S 399.3 76.2 40:05.28 ollama
```

Tail of the systemd journal (lines truncated at the right edge):
```
Feb 23 11:06:09 xxx ollama[33699]: .......................................................
Feb 23 11:06:09 xxx ollama[33699]: llama_new_context_with_model: n_ctx = 2048
Feb 23 11:06:09 xxx ollama[33699]: llama_new_context_with_model: freq_base = 1000.0
Feb 23 11:06:09 xxx ollama[33699]: llama_new_context_with_model: freq_scale = 1
Feb 23 11:06:09 xxx ollama[33699]: llama_kv_cache_init: CPU KV buffer size = 72.00 MiB
Feb 23 11:06:09 xxx ollama[33699]: llama_new_context_with_model: KV self size = 72.00 MiB, K (f16): 36.00 MiB, >
Feb 23 11:06:09 xxx ollama[33699]: llama_new_context_with_model: CPU input buffer size = 6.52 MiB
Feb 23 11:06:09 xxx ollama[33699]: llama_new_context_with_model: CPU compute buffer size = 57.00 MiB
Feb 23 11:06:09 xxx ollama[33699]: llama_new_context_with_model: graph splits (measure): 1
Feb 23 11:06:09 xxx ollama[33699]: time=2024-02-23T11:06:09.737+01:00 level=INFO source=dyn_ext_server.go:161 msg="S>
Feb 23 11:06:10 xxx ollama[33699]: [GIN] 2024/02/23 - 11:06:10 | 404 | 3.35µs | 127.0.0.1 | POST "/>
Feb 23 11:06:12 xxx ollama[33699]: [GIN] 2024/02/23 - 11:06:12 | 200 | 1m23s | 127.0.0.1 | POST "/>
Feb 23 11:06:12 xxx ollama[33699]: [GIN] 2024/02/23 - 11:06:12 | 200 | 1m23s | 127.0.0.1 | POST "/>
Feb 23 11:06:12 xxx ollama[33699]: time=2024-02-23T11:06:12.701+01:00 level=INFO source=routes.go:78 msg="changing l>
Feb 23 11:06:12 xxx ollama[33699]: time=2024-02-23T11:06:12.877+01:00 level=WARN source=llm.go:39 msg="requested con>
Feb 23 11:06:12 xxx ollama[33699]: time=2024-02-23T11:06:12.877+01:00 level=INFO source=cpu_common.go:11 msg="CPU ha>
Feb 23 11:06:12 xxx ollama[33699]: time=2024-02-23T11:06:12.877+01:00 level=INFO source=cpu_common.go:11 msg="CPU ha>
Feb 23 11:06:12 xxx ollama[33699]: time=2024-02-23T11:06:12.877+01:00 level=INFO source=llm.go:77 msg="GPU not avail>
Feb 23 11:06:12 xxx ollama[33699]: time=2024-02-23T11:06:12.877+01:00 level=INFO source=dyn_ext_server.go:90 msg="Lo>
Feb 23 11:06:12 xxx ollama[33699]: time=2024-02-23T11:06:12.877+01:00 level=INFO source=dyn_ext_server.go:150 msg="I>
Feb 23 11:06:12 xxx ollama[33699]: llama_model_loader: loaded meta data with 24 key-value pairs and 112 tensors from>
```
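For anyone trying to reproduce this outside Flowise, an embeddings request can be issued directly against the server with a bounded timeout, so a hung server surfaces as an error instead of an apparently stuck client. A minimal sketch (assumptions: the standard Ollama REST endpoint `POST /api/embeddings` taking `model` and `prompt`; the model name is illustrative, not from this issue):

```python
# Minimal Ollama embeddings client sketch using only the standard library.
import json
import urllib.request

def build_payload(model, prompt):
    # Request body expected by /api/embeddings
    return {"model": model, "prompt": prompt}

def embed(prompt, model="all-minilm", host="http://127.0.0.1:11434"):
    req = urllib.request.Request(
        f"{host}/api/embeddings",
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    # A finite timeout turns a wedged server into a visible URLError
    # instead of a client that waits forever.
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())["embedding"]
```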
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2707/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3877
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3877/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3877/comments
|
https://api.github.com/repos/ollama/ollama/issues/3877/events
|
https://github.com/ollama/ollama/issues/3877
| 2,261,430,257
|
I_kwDOJ0Z1Ps6Gyq_x
| 3,877
|
Support for Fedora 40 with rocm
|
{
"login": "oatmealm",
"id": 68159077,
"node_id": "MDQ6VXNlcjY4MTU5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/68159077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oatmealm",
"html_url": "https://github.com/oatmealm",
"followers_url": "https://api.github.com/users/oatmealm/followers",
"following_url": "https://api.github.com/users/oatmealm/following{/other_user}",
"gists_url": "https://api.github.com/users/oatmealm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oatmealm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oatmealm/subscriptions",
"organizations_url": "https://api.github.com/users/oatmealm/orgs",
"repos_url": "https://api.github.com/users/oatmealm/repos",
"events_url": "https://api.github.com/users/oatmealm/events{/privacy}",
"received_events_url": "https://api.github.com/users/oatmealm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2024-04-24T14:22:52
| 2024-10-17T15:31:01
| 2024-05-06T21:33:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Since F40 ships rocm6 now, it would be useful if it could be picked up. I have these symlinks in place to block the installer from downloading the libraries:
```
lrwxrwxrwx. 1 root root 28 Apr 24 16:08 libamd_comgr.so.2 -> /usr/lib64/libamd_comgr.so.2
lrwxrwxrwx. 1 root root 27 Apr 24 16:08 libamdhip64.so.6 -> /usr/lib64/libamdhip64.so.6
lrwxrwxrwx. 1 root root 29 Apr 24 16:08 libdrm_amdgpu.so.1 -> /usr/lib64/libdrm_amdgpu.so.1
lrwxrwxrwx. 1 root root 26 Apr 24 16:09 libhipblas.so.2 -> /usr/lib64/libhipblas.so.2
lrwxrwxrwx. 1 root root 32 Apr 24 16:09 libhsa-runtime64.so.1 -> /usr/lib64/libhsa-runtime64.so.1
lrwxrwxrwx. 1 root root 26 Apr 24 16:09 librocblas.so.4 -> /usr/lib64/librocblas.so.4
lrwxrwxrwx. 1 root root 28 Apr 24 16:09 librocsolver.so.0 -> /usr/lib64/librocsolver.so.0
lrwxrwxrwx. 1 root root 28 Apr 24 16:10 librocsparse.so.1 -> /usr/lib64/librocsparse.so.1
lrwxrwxrwx. 1 root root 24 Apr 24 16:10 libtinfo.so.6 -> /usr/lib64/libtinfo.so.6
lrwxrwxrwx. 1 root root 19 Apr 24 16:13 rocblas -> /usr/lib64/rocblas/
```
Though I do see problems with support for `gfx900`, which I can't work around even with `HSA_OVERRIDE_GFX_VERSION=9.0.0`...
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3877/reactions",
"total_count": 6,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/3877/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4203
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4203/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4203/comments
|
https://api.github.com/repos/ollama/ollama/issues/4203/events
|
https://github.com/ollama/ollama/issues/4203
| 2,281,176,632
|
I_kwDOJ0Z1Ps6H9_44
| 4,203
|
panic: runtime error: invalid memory address or nil pointer dereference
|
{
"login": "inoue0426",
"id": 8393063,
"node_id": "MDQ6VXNlcjgzOTMwNjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8393063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/inoue0426",
"html_url": "https://github.com/inoue0426",
"followers_url": "https://api.github.com/users/inoue0426/followers",
"following_url": "https://api.github.com/users/inoue0426/following{/other_user}",
"gists_url": "https://api.github.com/users/inoue0426/gists{/gist_id}",
"starred_url": "https://api.github.com/users/inoue0426/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/inoue0426/subscriptions",
"organizations_url": "https://api.github.com/users/inoue0426/orgs",
"repos_url": "https://api.github.com/users/inoue0426/repos",
"events_url": "https://api.github.com/users/inoue0426/events{/privacy}",
"received_events_url": "https://api.github.com/users/inoue0426/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-05-06T15:38:11
| 2024-05-06T23:16:59
| 2024-05-06T23:16:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I got the issue below when running the model on an M2 Mac Studio, after roughly 800K runs.
```
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":1054116,"tid":"0x1fbd37ac0","timestamp":1714759585}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1816,"msg":"slot progression","n_past":43,"n_past_se":0,"n_prompt_tokens_processed":45,"slot_id":0,"task_id":1054116,"tid":"0x1fbd37ac0","timestamp":1714759585}
{"function":"update_slots","level":"INFO","line":1840,"msg":"kv cache rm [p0, end)","p0":43,"slot_id":0,"task_id":1054116,"tid":"0x1fbd37ac0","timestamp":1714759585}
time=2024-05-03T14:06:25.278-04:00 level=ERROR source=server.go:559 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/05/03 - 14:06:25 | 200 | 2.48244975s | 127.0.0.1 | POST "/api/generate"
time=2024-05-03T14:06:25.296-04:00 level=ERROR source=server.go:559 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/05/03 - 14:06:25 | 200 | 365.046333ms | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/05/03 - 14:06:25 | 200 | 15.209µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/03 - 14:06:25 | 200 | 400µs | 127.0.0.1 | POST "/api/show"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x0 pc=0x102ab4a90]
goroutine 13 [running]:
github.com/ollama/ollama/server.(*runnerRef).needsReload(0x14000470b40, {0x103099320, 0x1400004ebe0}, 0x140005fe0f0)
github.c:442 +0x150
github.com/ollama/ollama/server.(*Scheduler).processPending(0x1400004ed20, {0x103099320, 0x1400004ebe0})
github.com/ollama/ollama/server/sched.go:130 +0x390
github.com/ollama/ollama/server.(*Scheduler).Run.func1()
github.com/ollama/ollama/server/sched.go:107 +0x28
created by github.com/ollama/ollama/server.(*Scheduler).Run in goroutine 1
github.com/ollama/ollama/server/sched.go:106 +0xc4
```
# Environment
- Mac Studio 2022
- Sonoma 14.4.1
- `./ollama-darwin -v`: `0.1.33-rc5`
# Code
- Ollama server: `./ollama-darwin run llama3:8b-instruct-q5_K_M`
- Python script
```Python
import subprocess
from joblib import Parallel, delayed
from tqdm import tqdm

def run_test(t):
    subprocess.run(['./test.sh', str(t)])

Parallel(n_jobs=-1)(delayed(run_test)(t) for t in tqdm(remains))
```
- Shell script
```Shell
#!/bin/zsh
output=$(./ollama-darwin run llama3:8b-instruct-q5_K_M "does the INPUT TEXT related to biomedicine? answer Y for Yes, N for No. only output 1 character. do not provide explanation. do not include markdown formatting. INPUT TEXT: "${1}" ")
echo "\"${1}\",\"${output}\"" >> res.csv
```
# Python Script log
```
76%|█████████████████████████ | 447280/589304 [45:06:42<14:16:23, 2.76it/s] 77%|█████████████████████████▎ | 450940/589304 [45:28:16<13:18:45, 2.89it/s]Error: Post "http://127.0.0.1:11434/api/generate": EOF
Error: Post "http://127.0.0.1:11434/api/generate": EOF
Error: Post "http://127.0.0.1:11434/api/generate": EOF
Error: Post "http://127.0.0.1:11434/api/generate": EOF
Error: Post "http://127.0.0.1:11434/api/generate": EOF
Error: Post "http://127.0.0.1:11434/api/generate": EOF
Error: could not connect to ollama app, is it running?
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py", line 436, in _process_worker
r = call_item()
File "/opt/anaconda3/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py", line 288, in __call__
return self.fn(*self.args, **self.kwargs)
File "/opt/anaconda3/lib/python3.9/site-packages/joblib/_parallel_backends.py", line 595, in __call__
return self.func(*args, **kwargs)
File "/opt/anaconda3/lib/python3.9/site-packages/joblib/parallel.py", line 262, in __call__
return [func(*args, **kwargs)
File "/opt/anaconda3/lib/python3.9/site-packages/joblib/parallel.py", line 262, in <listcomp>
return [func(*args, **kwargs)
File "/Users/inouey2/code/review/llama3-server.py", line 37, in run_test
subprocess.run(['./test.sh', str(t)])
File "/opt/anaconda3/lib/python3.9/subprocess.py", line 505, in run
with Popen(*popenargs, **kwargs) as process:
File "/opt/anaconda3/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/opt/anaconda3/lib/python3.9/subprocess.py", line 1754, in _execute_child
self.pid = _posixsubprocess.fork_exec(
ValueError: embedded null byte
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/inouey2/code/review/llama3-server.py", line 39, in <module>
Error: could not connect to ollama app, is it running?
Parallel(n_jobs=-1)(delayed(run_test)(t) for t in tqdm(remains))
File "/opt/anaconda3/lib/python3.9/site-packages/joblib/parallel.py", line 1056, in __call__
self.retrieve()
File "/opt/anaconda3/lib/python3.9/site-packages/joblib/parallel.py", line 935, in retrieve
Error: could not connect to ollama app, is it running?
self._output.extend(job.get(timeout=self.timeout))
File "/opt/anaconda3/lib/python3.9/site-packages/joblib/_parallel_backends.py", line 542, in wrap_future_result
return future.result(timeout=timeout)
File "/opt/anaconda3/lib/python3.9/concurrent/futures/_base.py", line 438, in result
return self.__get_result()
File "/opt/anaconda3/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
ValueError: embedded null byte
77%|█████████████████████████▎ | 452309/589304 [45:36:11<13:48:44, 2.76it/s]
```
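The final `ValueError: embedded null byte` is raised by CPython itself, not by Ollama: POSIX `exec*` arguments are NUL-terminated C strings, so `subprocess` rejects any argument containing `\x00` before the shell script even starts. A minimal reproduction (an illustration, not taken from this issue):

```python
import subprocess

# CPython validates exec arguments before forking; an embedded null byte
# in any argument raises ValueError rather than reaching the child process.
try:
    subprocess.run(["echo", "bad\x00arg"])
except ValueError as e:
    print(e)  # "embedded null byte"
```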
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.33-rc5
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4203/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/358
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/358/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/358/comments
|
https://api.github.com/repos/ollama/ollama/issues/358/events
|
https://github.com/ollama/ollama/issues/358
| 1,852,836,662
|
I_kwDOJ0Z1Ps5ucAs2
| 358
|
Parallel requests
|
{
"login": "youssef02",
"id": 26381154,
"node_id": "MDQ6VXNlcjI2MzgxMTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/26381154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/youssef02",
"html_url": "https://github.com/youssef02",
"followers_url": "https://api.github.com/users/youssef02/followers",
"following_url": "https://api.github.com/users/youssef02/following{/other_user}",
"gists_url": "https://api.github.com/users/youssef02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/youssef02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/youssef02/subscriptions",
"organizations_url": "https://api.github.com/users/youssef02/orgs",
"repos_url": "https://api.github.com/users/youssef02/repos",
"events_url": "https://api.github.com/users/youssef02/events{/privacy}",
"received_events_url": "https://api.github.com/users/youssef02/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 82
| 2023-08-16T09:15:52
| 2025-01-29T01:48:45
| 2024-05-02T20:17:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
the app is amazing, but the problem is that if I want to build a multi-agent setup from one API, I need to create a queue system, since the server can reply to only one request at a time. Is there a way to improve this, or do I have to implement a queue system myself?
I just started here, so sorry for any mistakes. ;)
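Until the server handles requests concurrently, the workaround described above can be sketched client-side: funnel all prompts through a single worker thread so only one request is in flight at a time. This is a hypothetical helper, not part of Ollama; `handler` stands in for whatever function actually calls the API.

```python
import queue
import threading

def serialize_requests(tasks, handler):
    """Run tasks one at a time through a single worker thread.

    A client-side queue (hypothetical helper, not part of Ollama) for a
    server that can only service one request at a time.
    """
    q = queue.Queue()
    results = []

    def worker():
        while True:
            item = q.get()
            if item is None:  # sentinel: no more work
                break
            results.append(handler(item))  # e.g. a call to the Ollama API
            q.task_done()

    t = threading.Thread(target=worker)
    t.start()
    for task in tasks:
        q.put(task)
    q.put(None)
    t.join()
    return results

# Example: each "prompt" is handled strictly in sequence.
print(serialize_requests(["a", "b", "c"], str.upper))  # → ['A', 'B', 'C']
```

Multiple producers can safely `put` onto the same `queue.Queue`; the single consumer guarantees the server never sees overlapping requests.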
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/358/reactions",
"total_count": 56,
"+1": 47,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 3,
"rocket": 2,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/358/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8072
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8072/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8072/comments
|
https://api.github.com/repos/ollama/ollama/issues/8072/events
|
https://github.com/ollama/ollama/pull/8072
| 2,736,352,448
|
PR_kwDOJ0Z1Ps6FCuxX
| 8,072
|
Support Unix sockets for OLLAMA_HOST
|
{
"login": "doringeman",
"id": 33001019,
"node_id": "MDQ6VXNlcjMzMDAxMDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/33001019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doringeman",
"html_url": "https://github.com/doringeman",
"followers_url": "https://api.github.com/users/doringeman/followers",
"following_url": "https://api.github.com/users/doringeman/following{/other_user}",
"gists_url": "https://api.github.com/users/doringeman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/doringeman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doringeman/subscriptions",
"organizations_url": "https://api.github.com/users/doringeman/orgs",
"repos_url": "https://api.github.com/users/doringeman/repos",
"events_url": "https://api.github.com/users/doringeman/events{/privacy}",
"received_events_url": "https://api.github.com/users/doringeman/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2024-12-12T16:22:02
| 2025-01-13T19:02:01
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8072",
"html_url": "https://github.com/ollama/ollama/pull/8072",
"diff_url": "https://github.com/ollama/ollama/pull/8072.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8072.patch",
"merged_at": null
}
|
Add support for Unix sockets for OLLAMA_HOST.
```
make -j8 && OLLAMA_HOST=unix:///tmp/ollama.sock ./ollama serve
OLLAMA_HOST=unix:///tmp/ollama.sock ./ollama list
curl --unix-socket /tmp/ollama.sock localhost/api/tags
```
Implements https://github.com/ollama/ollama/issues/739.
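The `curl --unix-socket` call above has a Python equivalent using only the standard library: subclass `http.client.HTTPConnection` and swap the TCP connect for an `AF_UNIX` connect. The socket path `/tmp/ollama.sock` matches the example commands above; the class name is my own.

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """Speak HTTP over a Unix domain socket instead of TCP."""

    def __init__(self, socket_path):
        # The host is only used for the Host header, not for connecting.
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

# Against a server started with OLLAMA_HOST=unix:///tmp/ollama.sock:
#   conn = UnixHTTPConnection("/tmp/ollama.sock")
#   conn.request("GET", "/api/tags")
#   print(conn.getresponse().read())
```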
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8072/reactions",
"total_count": 15,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 8,
"rocket": 4,
"eyes": 3
}
|
https://api.github.com/repos/ollama/ollama/issues/8072/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5440
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5440/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5440/comments
|
https://api.github.com/repos/ollama/ollama/issues/5440/events
|
https://github.com/ollama/ollama/pull/5440
| 2,386,881,258
|
PR_kwDOJ0Z1Ps50O6ny
| 5,440
|
update named templates
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-02T18:23:38
| 2024-07-09T16:36:34
| 2024-07-09T16:36:32
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5440",
"html_url": "https://github.com/ollama/ollama/pull/5440",
"diff_url": "https://github.com/ollama/ollama/pull/5440.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5440.patch",
"merged_at": "2024-07-09T16:36:32"
}
|
update named templates to use messages, falling back to the previous template when messages isn't available, for compatibility
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5440/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5695
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5695/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5695/comments
|
https://api.github.com/repos/ollama/ollama/issues/5695/events
|
https://github.com/ollama/ollama/pull/5695
| 2,407,904,285
|
PR_kwDOJ0Z1Ps51V5Ol
| 5,695
|
server: lowercase roles for compatibility with clients
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-15T05:44:15
| 2024-07-15T20:55:59
| 2024-07-15T20:55:57
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5695",
"html_url": "https://github.com/ollama/ollama/pull/5695",
"diff_url": "https://github.com/ollama/ollama/pull/5695.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5695.patch",
"merged_at": "2024-07-15T20:55:57"
}
|
Fixes https://github.com/ollama/ollama/issues/5687. Note: we may first want to make sure that there are no cases where roles may be uppercase.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5695/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5011
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5011/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5011/comments
|
https://api.github.com/repos/ollama/ollama/issues/5011/events
|
https://github.com/ollama/ollama/issues/5011
| 2,349,971,422
|
I_kwDOJ0Z1Ps6MEbfe
| 5,011
|
Qwen2 Model
|
{
"login": "kindzhon",
"id": 44000205,
"node_id": "MDQ6VXNlcjQ0MDAwMjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/44000205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kindzhon",
"html_url": "https://github.com/kindzhon",
"followers_url": "https://api.github.com/users/kindzhon/followers",
"following_url": "https://api.github.com/users/kindzhon/following{/other_user}",
"gists_url": "https://api.github.com/users/kindzhon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kindzhon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kindzhon/subscriptions",
"organizations_url": "https://api.github.com/users/kindzhon/orgs",
"repos_url": "https://api.github.com/users/kindzhon/repos",
"events_url": "https://api.github.com/users/kindzhon/events{/privacy}",
"received_events_url": "https://api.github.com/users/kindzhon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-06-13T01:54:18
| 2024-06-13T18:41:06
| 2024-06-13T18:41:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Qwen2 is not in the model list.
Why not add new models to the list automatically? Open WebUI doesn't have this issue.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5011/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3840
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3840/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3840/comments
|
https://api.github.com/repos/ollama/ollama/issues/3840/events
|
https://github.com/ollama/ollama/issues/3840
| 2,258,270,176
|
I_kwDOJ0Z1Ps6Gmnfg
| 3,840
|
Vega 56 (gfx900) fails to load model - hipMemGetInfo - error: invalid argument
|
{
"login": "quwassar",
"id": 70705054,
"node_id": "MDQ6VXNlcjcwNzA1MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/70705054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quwassar",
"html_url": "https://github.com/quwassar",
"followers_url": "https://api.github.com/users/quwassar/followers",
"following_url": "https://api.github.com/users/quwassar/following{/other_user}",
"gists_url": "https://api.github.com/users/quwassar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quwassar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quwassar/subscriptions",
"organizations_url": "https://api.github.com/users/quwassar/orgs",
"repos_url": "https://api.github.com/users/quwassar/repos",
"events_url": "https://api.github.com/users/quwassar/events{/privacy}",
"received_events_url": "https://api.github.com/users/quwassar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 13
| 2024-04-23T08:28:38
| 2024-06-24T19:22:30
| 2024-06-21T23:56:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi! I'm having trouble with my AMD Vega 56 video card:
```
Apr 23 08:24:36 chat-server ollama[95121]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Apr 23 08:24:36 chat-server ollama[95121]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
Apr 23 08:24:36 chat-server ollama[95121]: ggml_cuda_init: found 1 ROCm devices:
Apr 23 08:24:36 chat-server ollama[95121]: Device 0: Radeon RX Vega, compute capability 9.0, VMM: no
Apr 23 08:24:36 chat-server ollama[95121]: CUDA error: invalid argument
Apr 23 08:24:36 chat-server ollama[95121]: current device: 0, in function ggml_backend_cuda_get_device_memory at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:2612
Apr 23 08:24:36 chat-server ollama[95121]: hipMemGetInfo(free, total)
Apr 23 08:24:36 chat-server ollama[95121]: GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !"CUDA error"
Apr 23 08:24:36 chat-server ollama[95179]: [New LWP 95176]
Apr 23 08:24:37 chat-server ollama[95179]: [Thread debugging using libthread_db enabled]
Apr 23 08:24:37 chat-server ollama[95179]: Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Apr 23 08:24:37 chat-server ollama[95179]: 0x00007f4968df142f in __GI___wait4 (pid=95179, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
Apr 23 08:24:37 chat-server ollama[95121]: 30 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
Apr 23 08:24:37 chat-server ollama[95179]: #0 0x00007f4968df142f in __GI___wait4 (pid=95179, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
Apr 23 08:24:37 chat-server ollama[95179]: 30 in ../sysdeps/unix/sysv/linux/wait4.c
Apr 23 08:24:37 chat-server ollama[95179]: #1 0x00000000024e8084 in ggml_cuda_error(char const*, char const*, char const*, int, char const*) ()
Apr 23 08:24:37 chat-server ollama[95179]: #2 0x00000000024e900f in ggml_backend_cuda_get_device_memory ()
Apr 23 08:24:37 chat-server ollama[95179]: #3 0x00000000023dc720 in llm_load_tensors(llama_model_loader&, llama_model&, int, llama_split_mode, int, float const*, bool, bool (*)(float, void*), void*) ()
Apr 23 08:24:37 chat-server ollama[95179]: #4 0x00000000023bf6e6 in llama_model_load(std::string const&, llama_model&, llama_model_params&) ()
Apr 23 08:24:37 chat-server ollama[95179]: #5 0x00000000023bd47f in llama_load_model_from_file ()
Apr 23 08:24:37 chat-server ollama[95179]: #6 0x0000000002378472 in llama_init_from_gpt_params(gpt_params&) ()
Apr 23 08:24:37 chat-server ollama[95179]: #7 0x00000000022d8754 in llama_server_context::load_model(gpt_params const&) ()
Apr 23 08:24:37 chat-server ollama[95179]: #8 0x00000000022c4381 in main ()
Apr 23 08:24:37 chat-server ollama[95179]: [Inferior 1 (process 95175) detached]
Apr 23 08:24:37 chat-server ollama[95121]: time=2024-04-23T08:24:37.936Z level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: -1 CUDA error: invalid argument\n current device: 0, in function ggml_backend_cuda_get_device_memory at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:2612\n hipMemGetInfo(free, total)\nGGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !\"CUDA error\""
Apr 23 08:24:37 chat-server ollama[95121]: time=2024-04-23T08:24:37.936Z level=DEBUG source=server.go:832 msg="stopping llama server"
```
Can you help me?
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3840/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3840/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4653
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4653/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4653/comments
|
https://api.github.com/repos/ollama/ollama/issues/4653/events
|
https://github.com/ollama/ollama/pull/4653
| 2,317,977,160
|
PR_kwDOJ0Z1Ps5wmeaa
| 4,653
|
Improve install script output on WSL2 and Linux
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-26T23:14:54
| 2024-05-28T21:41:51
| 2024-05-28T21:41:50
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4653",
"html_url": "https://github.com/ollama/ollama/pull/4653",
"diff_url": "https://github.com/ollama/ollama/pull/4653.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4653.patch",
"merged_at": "2024-05-28T21:41:50"
}
|
Fixes:
* Install success not showing on Linux systems with NVIDIA GPUs
* WSL2 showing no GPU detected when the NVIDIA GPU passthrough is available
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4653/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1426
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1426/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1426/comments
|
https://api.github.com/repos/ollama/ollama/issues/1426/events
|
https://github.com/ollama/ollama/pull/1426
| 2,031,745,478
|
PR_kwDOJ0Z1Ps5heb_M
| 1,426
|
fix: only flush template in chat when current role encountered
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-08T00:51:04
| 2023-12-08T21:44:25
| 2023-12-08T21:44:24
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1426",
"html_url": "https://github.com/ollama/ollama/pull/1426",
"diff_url": "https://github.com/ollama/ollama/pull/1426.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1426.patch",
"merged_at": "2023-12-08T21:44:24"
}
|
There was a bug in the `/chat` endpoint during templating that resulted in the prompt template being written incorrectly.
If a `user` message was encountered when `system` was already set, the template would be flushed before the `user` content was set, which was not correct. The template should only be flushed when the current role has been encountered before.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1426/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1029
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1029/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1029/comments
|
https://api.github.com/repos/ollama/ollama/issues/1029/events
|
https://github.com/ollama/ollama/pull/1029
| 1,981,069,674
|
PR_kwDOJ0Z1Ps5eyo2R
| 1,029
|
Added logseq ollama plugin
|
{
"login": "omagdy7",
"id": 99906646,
"node_id": "U_kgDOBfR0Vg",
"avatar_url": "https://avatars.githubusercontent.com/u/99906646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omagdy7",
"html_url": "https://github.com/omagdy7",
"followers_url": "https://api.github.com/users/omagdy7/followers",
"following_url": "https://api.github.com/users/omagdy7/following{/other_user}",
"gists_url": "https://api.github.com/users/omagdy7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omagdy7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omagdy7/subscriptions",
"organizations_url": "https://api.github.com/users/omagdy7/orgs",
"repos_url": "https://api.github.com/users/omagdy7/repos",
"events_url": "https://api.github.com/users/omagdy7/events{/privacy}",
"received_events_url": "https://api.github.com/users/omagdy7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-07T10:49:05
| 2023-11-07T17:58:13
| 2023-11-07T17:58:13
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1029",
"html_url": "https://github.com/ollama/ollama/pull/1029",
"diff_url": "https://github.com/ollama/ollama/pull/1029.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1029.patch",
"merged_at": "2023-11-07T17:58:13"
}
|
Adds a plugin I made to integrate ollama with [logseq](https://github.com/logseq/logseq)
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1029/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/244
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/244/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/244/comments
|
https://api.github.com/repos/ollama/ollama/issues/244/events
|
https://github.com/ollama/ollama/pull/244
| 1,829,534,136
|
PR_kwDOJ0Z1Ps5W0Wmz
| 244
|
tell users to check the server error logs
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-31T15:50:37
| 2023-08-02T21:08:12
| 2023-08-02T21:08:11
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/244",
"html_url": "https://github.com/ollama/ollama/pull/244",
"diff_url": "https://github.com/ollama/ollama/pull/244.diff",
"patch_url": "https://github.com/ollama/ollama/pull/244.patch",
"merged_at": "2023-08-02T21:08:11"
}
|
When possible, tell users to check the server error logs to get more information on why their command failed.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/244/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6483
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6483/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6483/comments
|
https://api.github.com/repos/ollama/ollama/issues/6483/events
|
https://github.com/ollama/ollama/pull/6483
| 2,483,895,650
|
PR_kwDOJ0Z1Ps55SYRI
| 6,483
|
gpu: Group GPU Library sets by variant
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-23T21:06:35
| 2024-08-23T22:12:05
| 2024-08-23T22:11:56
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6483",
"html_url": "https://github.com/ollama/ollama/pull/6483",
"diff_url": "https://github.com/ollama/ollama/pull/6483.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6483.patch",
"merged_at": "2024-08-23T22:11:56"
}
|
The recent CUDA variant changes uncovered a bug in `ByLibrary`, which failed to group GPU types by their common variant.
The new unit test fails without the one-line fix and passes with it.
Fixes #6479
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6483/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8565
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8565/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8565/comments
|
https://api.github.com/repos/ollama/ollama/issues/8565/events
|
https://github.com/ollama/ollama/issues/8565
| 2,809,182,773
|
I_kwDOJ0Z1Ps6ncLo1
| 8,565
|
Error when trying to download deepseek-r1:7b
|
{
"login": "makhlwf",
"id": 78276231,
"node_id": "MDQ6VXNlcjc4Mjc2MjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/78276231?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makhlwf",
"html_url": "https://github.com/makhlwf",
"followers_url": "https://api.github.com/users/makhlwf/followers",
"following_url": "https://api.github.com/users/makhlwf/following{/other_user}",
"gists_url": "https://api.github.com/users/makhlwf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makhlwf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makhlwf/subscriptions",
"organizations_url": "https://api.github.com/users/makhlwf/orgs",
"repos_url": "https://api.github.com/users/makhlwf/repos",
"events_url": "https://api.github.com/users/makhlwf/events{/privacy}",
"received_events_url": "https://api.github.com/users/makhlwf/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 11
| 2025-01-24T11:14:17
| 2025-01-29T14:53:46
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I tried running `ollama run deepseek-r1:7b`.
It started downloading for a minute, then this error appeared:
Error: Post "http://127.0.0.1:11434/api/show": dial tcp 127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.7
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8565/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8565/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/890
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/890/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/890/comments
|
https://api.github.com/repos/ollama/ollama/issues/890/events
|
https://github.com/ollama/ollama/pull/890
| 1,959,595,439
|
PR_kwDOJ0Z1Ps5dqKhk
| 890
|
document linux install inline with the script file
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-10-24T16:03:12
| 2023-10-25T14:58:18
| 2023-10-25T14:58:17
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/890",
"html_url": "https://github.com/ollama/ollama/pull/890",
"diff_url": "https://github.com/ollama/ollama/pull/890.diff",
"patch_url": "https://github.com/ollama/ollama/pull/890.patch",
"merged_at": "2023-10-25T14:58:17"
}
|
Shell scripts are dense and hard to read. Document explicitly what the installation script is doing so that enquiring users can see exactly what changes are being made to their system.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/890/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5508
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5508/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5508/comments
|
https://api.github.com/repos/ollama/ollama/issues/5508/events
|
https://github.com/ollama/ollama/issues/5508
| 2,393,209,058
|
I_kwDOJ0Z1Ps6OpXji
| 5,508
|
Ollama running 2 instances
|
{
"login": "electro199",
"id": 109358640,
"node_id": "U_kgDOBoSuMA",
"avatar_url": "https://avatars.githubusercontent.com/u/109358640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/electro199",
"html_url": "https://github.com/electro199",
"followers_url": "https://api.github.com/users/electro199/followers",
"following_url": "https://api.github.com/users/electro199/following{/other_user}",
"gists_url": "https://api.github.com/users/electro199/gists{/gist_id}",
"starred_url": "https://api.github.com/users/electro199/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/electro199/subscriptions",
"organizations_url": "https://api.github.com/users/electro199/orgs",
"repos_url": "https://api.github.com/users/electro199/repos",
"events_url": "https://api.github.com/users/electro199/events{/privacy}",
"received_events_url": "https://api.github.com/users/electro199/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-07-05T22:43:44
| 2024-08-06T22:23:51
| 2024-08-06T22:23:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When starting Ollama from the Start menu tray, it starts the Ollama service, which immediately loads the model under the name ollama.exe, using around 4.5 GB of RAM. When using the API, it starts another program called ollama_llama_server.exe, using around 4 GB of RAM and almost 3 GB of VRAM.
The main issue is that when Ollama starts, it should be ollama_llama_server loading the model, not ollama.exe.
In this screenshot I was using Ollama through the API; when I clicked on Ollama it loaded the model alongside, ignoring the already running server.

I am not sure if this is intended behavior. Also, how do I stop Ollama from loading the model at startup without turning off the service?
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.48
|
{
"login": "electro199",
"id": 109358640,
"node_id": "U_kgDOBoSuMA",
"avatar_url": "https://avatars.githubusercontent.com/u/109358640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/electro199",
"html_url": "https://github.com/electro199",
"followers_url": "https://api.github.com/users/electro199/followers",
"following_url": "https://api.github.com/users/electro199/following{/other_user}",
"gists_url": "https://api.github.com/users/electro199/gists{/gist_id}",
"starred_url": "https://api.github.com/users/electro199/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/electro199/subscriptions",
"organizations_url": "https://api.github.com/users/electro199/orgs",
"repos_url": "https://api.github.com/users/electro199/repos",
"events_url": "https://api.github.com/users/electro199/events{/privacy}",
"received_events_url": "https://api.github.com/users/electro199/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5508/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6872
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6872/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6872/comments
|
https://api.github.com/repos/ollama/ollama/issues/6872/events
|
https://github.com/ollama/ollama/issues/6872
| 2,535,358,329
|
I_kwDOJ0Z1Ps6XHn95
| 6,872
|
GGGGG in response
|
{
"login": "air55555",
"id": 2674008,
"node_id": "MDQ6VXNlcjI2NzQwMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2674008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/air55555",
"html_url": "https://github.com/air55555",
"followers_url": "https://api.github.com/users/air55555/followers",
"following_url": "https://api.github.com/users/air55555/following{/other_user}",
"gists_url": "https://api.github.com/users/air55555/gists{/gist_id}",
"starred_url": "https://api.github.com/users/air55555/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/air55555/subscriptions",
"organizations_url": "https://api.github.com/users/air55555/orgs",
"repos_url": "https://api.github.com/users/air55555/repos",
"events_url": "https://api.github.com/users/air55555/events{/privacy}",
"received_events_url": "https://api.github.com/users/air55555/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-09-19T05:55:54
| 2024-09-25T07:14:55
| 2024-09-25T07:14:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
---- command log
C:\Program Files\Far Manager>ollama run llama3.1:70b
>>> name
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
>>> /?
Available Commands:
/set Set session variables
/show Show model information
/load <model> Load a session or model
/save <model> Save your current session
/clear Clear session context
/bye Exit
/?, /help Help for a command
/? shortcuts Help for keyboard shortcuts
Use """ to begin a multi-line message.
>>> /show
Available Commands:
/show info Show details for this model
/show license Show model license
/show modelfile Show Modelfile for this model
/show parameters Show parameters for this model
/show system Show system message
/show template Show prompt template
>>> /show info
Model
parameters 70.6B
quantization Q4_0
arch llama
context length 131072
embedding length 8192
Parameters
stop "<|start_header_id|>"
stop "<|end_header_id|>"
stop "<|eot_id|>"
License
LLAMA 3.1 COMMUNITY LICENSE AGREEMENT
Llama 3.1 Version Release Date: July 23, 2024
---- server log [server.log](https://github.com/user-attachments/files/17054228/server.log)
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.10
|
{
"login": "air55555",
"id": 2674008,
"node_id": "MDQ6VXNlcjI2NzQwMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2674008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/air55555",
"html_url": "https://github.com/air55555",
"followers_url": "https://api.github.com/users/air55555/followers",
"following_url": "https://api.github.com/users/air55555/following{/other_user}",
"gists_url": "https://api.github.com/users/air55555/gists{/gist_id}",
"starred_url": "https://api.github.com/users/air55555/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/air55555/subscriptions",
"organizations_url": "https://api.github.com/users/air55555/orgs",
"repos_url": "https://api.github.com/users/air55555/repos",
"events_url": "https://api.github.com/users/air55555/events{/privacy}",
"received_events_url": "https://api.github.com/users/air55555/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6872/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2242
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2242/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2242/comments
|
https://api.github.com/repos/ollama/ollama/issues/2242/events
|
https://github.com/ollama/ollama/issues/2242
| 2,104,344,623
|
I_kwDOJ0Z1Ps59bcAv
| 2,242
|
:memo: Documentation > Add `ollama-python` code samples to `llava` model page
|
{
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/followers",
"following_url": "https://api.github.com/users/adriens/following{/other_user}",
"gists_url": "https://api.github.com/users/adriens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adriens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adriens/subscriptions",
"organizations_url": "https://api.github.com/users/adriens/orgs",
"repos_url": "https://api.github.com/users/adriens/repos",
"events_url": "https://api.github.com/users/adriens/events{/privacy}",
"received_events_url": "https://api.github.com/users/adriens/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-01-28T20:56:22
| 2024-03-12T21:32:17
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# :grey_question: About
The [`ollama` library has been released for Python and JavaScript](https://twitter.com/ollama/status/1750683461256634409), which makes things even easier to achieve with local AI.
See this tutorial on how to handle images with the `ollama` library: [Ollama has a Python library!](https://youtu.be/JwYwPiOh72w?si=6m41A6VVmUSez2-u&t=152):


# :bulb: The idea
Wouldn't it be interesting to add that kind of code to the [`llava` image page](https://ollama.ai/library/llava)?
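A sample snippet for the page might look like the following. This is a hypothetical sketch, not an official snippet: the `build_llava_request` helper and the `./cat.jpg` path are illustrative, and actually sending the request assumes the `ollama` Python package is installed and a local Ollama server has the `llava` model pulled.

```python
def build_llava_request(prompt: str, image_paths: list) -> dict:
    """Assemble a multimodal chat payload in the shape ollama.chat() expects:
    a model name plus a user message carrying both text and image paths."""
    return {
        'model': 'llava',
        'messages': [{
            'role': 'user',
            'content': prompt,
            'images': list(image_paths),  # local file paths (or raw bytes)
        }],
    }

req = build_llava_request('What is in this image?', ['./cat.jpg'])
# With the `ollama` package installed and a server running, the call would be:
#   import ollama
#   response = ollama.chat(**req)
#   print(response['message']['content'])
print(req['model'])
```

Keeping the payload construction separate from the network call makes the sample easy to adapt to other multimodal models on the library page.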
# :moneybag: Benefits
- Promote libraries
- Make it simpler for people to build on top of `ollama`
# :bookmark: Resources
- https://ollama.ai/library/llava
- [Ollama has a Python library!](https://youtu.be/JwYwPiOh72w?si=6m41A6VVmUSez2-u&t=152)
- [`ollama` library has been released for Python and javascript](https://twitter.com/ollama/status/1750683461256634409)
- [Python & JavaScript Libraries](https://ollama.ai/blog/python-javascript-libraries)
- [`ollama-python`](https://github.com/ollama/ollama-python)
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2242/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1683
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1683/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1683/comments
|
https://api.github.com/repos/ollama/ollama/issues/1683/events
|
https://github.com/ollama/ollama/pull/1683
| 2,054,587,573
|
PR_kwDOJ0Z1Ps5isAg_
| 1,683
|
Fix windows system memory lookup
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-12-23T00:05:02
| 2024-01-03T17:00:52
| 2024-01-03T17:00:40
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1683",
"html_url": "https://github.com/ollama/ollama/pull/1683",
"diff_url": "https://github.com/ollama/ollama/pull/1683.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1683.patch",
"merged_at": "2024-01-03T17:00:39"
}
|
This refines the gpu package error handling and fixes a bug with the system memory lookup on windows.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1683/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6189
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6189/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6189/comments
|
https://api.github.com/repos/ollama/ollama/issues/6189/events
|
https://github.com/ollama/ollama/pull/6189
| 2,449,610,931
|
PR_kwDOJ0Z1Ps53gOYv
| 6,189
|
Sort Batch Embed Results
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-08-05T22:58:11
| 2024-08-05T23:55:36
| 2024-08-05T23:55:34
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6189",
"html_url": "https://github.com/ollama/ollama/pull/6189",
"diff_url": "https://github.com/ollama/ollama/pull/6189.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6189.patch",
"merged_at": "2024-08-05T23:55:34"
}
|
Batch embed returns results out of order; fix by sorting results by task ID.
Resolves #6187
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6189/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/690
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/690/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/690/comments
|
https://api.github.com/repos/ollama/ollama/issues/690/events
|
https://github.com/ollama/ollama/issues/690
| 1,924,386,973
|
I_kwDOJ0Z1Ps5ys9Cd
| 690
|
Stop Ollama
|
{
"login": "mora-phi",
"id": 8950440,
"node_id": "MDQ6VXNlcjg5NTA0NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8950440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mora-phi",
"html_url": "https://github.com/mora-phi",
"followers_url": "https://api.github.com/users/mora-phi/followers",
"following_url": "https://api.github.com/users/mora-phi/following{/other_user}",
"gists_url": "https://api.github.com/users/mora-phi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mora-phi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mora-phi/subscriptions",
"organizations_url": "https://api.github.com/users/mora-phi/orgs",
"repos_url": "https://api.github.com/users/mora-phi/repos",
"events_url": "https://api.github.com/users/mora-phi/events{/privacy}",
"received_events_url": "https://api.github.com/users/mora-phi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 53
| 2023-10-03T15:22:34
| 2025-01-29T08:40:53
| 2023-10-04T07:07:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
How can I stop Ollama?
If I run "ollama run llama2", for instance, quit with Ctrl-C, and then go to http://127.0.0.1:11434/ in a browser, it shows "Ollama is running".
When I kill the running process with kill -9, a new process is instantly spawned.
Therefore I don't know how to completely stop Ollama...
(I'm on macOS)
|
{
"login": "mora-phi",
"id": 8950440,
"node_id": "MDQ6VXNlcjg5NTA0NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8950440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mora-phi",
"html_url": "https://github.com/mora-phi",
"followers_url": "https://api.github.com/users/mora-phi/followers",
"following_url": "https://api.github.com/users/mora-phi/following{/other_user}",
"gists_url": "https://api.github.com/users/mora-phi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mora-phi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mora-phi/subscriptions",
"organizations_url": "https://api.github.com/users/mora-phi/orgs",
"repos_url": "https://api.github.com/users/mora-phi/repos",
"events_url": "https://api.github.com/users/mora-phi/events{/privacy}",
"received_events_url": "https://api.github.com/users/mora-phi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/690/reactions",
"total_count": 35,
"+1": 35,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/690/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3001
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3001/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3001/comments
|
https://api.github.com/repos/ollama/ollama/issues/3001/events
|
https://github.com/ollama/ollama/issues/3001
| 2,175,789,913
|
I_kwDOJ0Z1Ps6Br-tZ
| 3,001
|
pull model fail:read: connection reset by peer
|
{
"login": "QiuZiXian",
"id": 28780269,
"node_id": "MDQ6VXNlcjI4NzgwMjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/28780269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QiuZiXian",
"html_url": "https://github.com/QiuZiXian",
"followers_url": "https://api.github.com/users/QiuZiXian/followers",
"following_url": "https://api.github.com/users/QiuZiXian/following{/other_user}",
"gists_url": "https://api.github.com/users/QiuZiXian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QiuZiXian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QiuZiXian/subscriptions",
"organizations_url": "https://api.github.com/users/QiuZiXian/orgs",
"repos_url": "https://api.github.com/users/QiuZiXian/repos",
"events_url": "https://api.github.com/users/QiuZiXian/events{/privacy}",
"received_events_url": "https://api.github.com/users/QiuZiXian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-08T10:57:03
| 2024-03-11T03:33:14
| 2024-03-11T03:33:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
env: Linux
I have installed Ollama successfully,
but `ollama pull qwen:7b` fails like this:
```bash
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=bW98ekNaedsqamdcHp8ziQ&scope=repository%!A(MISSING)library%!F(MISSING)qwen%!A(MISSING)pull&service=ollama.com&ts=1709894898": read tcp 192.168.26.128:44940->34.120.132.20:443: read: connection reset by peer
```
Pinging 34.120.132.20 works.
I tried setting a proxy, but it did not work.
Please help me: give me a way to pull the model, or to build it from https://huggingface.co/Qwen/Qwen1.5-7B-Chat. Thanks!
```bash
(I have downloaded from https://huggingface.co/Qwen/Qwen1.5-7B-Chat;
vim Modelfile; set From .//Qwen1.5-7B-Chat; error: transferring model data
Error: read /usr/share/ollama/.ollama/models/Qwen1.5-7B-Chat: is a directory)
```
|
{
"login": "QiuZiXian",
"id": 28780269,
"node_id": "MDQ6VXNlcjI4NzgwMjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/28780269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QiuZiXian",
"html_url": "https://github.com/QiuZiXian",
"followers_url": "https://api.github.com/users/QiuZiXian/followers",
"following_url": "https://api.github.com/users/QiuZiXian/following{/other_user}",
"gists_url": "https://api.github.com/users/QiuZiXian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QiuZiXian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QiuZiXian/subscriptions",
"organizations_url": "https://api.github.com/users/QiuZiXian/orgs",
"repos_url": "https://api.github.com/users/QiuZiXian/repos",
"events_url": "https://api.github.com/users/QiuZiXian/events{/privacy}",
"received_events_url": "https://api.github.com/users/QiuZiXian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3001/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2513
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2513/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2513/comments
|
https://api.github.com/repos/ollama/ollama/issues/2513/events
|
https://github.com/ollama/ollama/issues/2513
| 2,136,609,829
|
I_kwDOJ0Z1Ps5_WhQl
| 2,513
|
ECONNREFUSED error
|
{
"login": "jakobhoeg",
"id": 114422072,
"node_id": "U_kgDOBtHxOA",
"avatar_url": "https://avatars.githubusercontent.com/u/114422072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jakobhoeg",
"html_url": "https://github.com/jakobhoeg",
"followers_url": "https://api.github.com/users/jakobhoeg/followers",
"following_url": "https://api.github.com/users/jakobhoeg/following{/other_user}",
"gists_url": "https://api.github.com/users/jakobhoeg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jakobhoeg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jakobhoeg/subscriptions",
"organizations_url": "https://api.github.com/users/jakobhoeg/orgs",
"repos_url": "https://api.github.com/users/jakobhoeg/repos",
"events_url": "https://api.github.com/users/jakobhoeg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jakobhoeg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
},
{
"id": 7706485225,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eh6Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/js",
"name": "js",
"color": "5F50E3",
"default": false,
"description": "relating to the ollama-js client library"
}
] |
closed
| false
| null |
[] | null | 16
| 2024-02-15T14:01:26
| 2024-12-19T21:35:03
| 2024-12-19T21:35:03
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Keep getting an ECONNREFUSED error when trying to use Ollama from my Next.js frontend in production:
```
⨯ TypeError: fetch failed
at Object.fetch (node:internal/deps/undici/undici:11730:11)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async globalThis.fetch (/var/task/node_modules/next/dist/compiled/next-server/app-route.runtime.prod.js:6:36091)
at async s (/var/task/.next/server/app/api/model/route.js:1:491)
at async /var/task/node_modules/next/dist/compiled/next-server/app-route.runtime.prod.js:6:42484
at async eI.execute (/var/task/node_modules/next/dist/compiled/next-server/app-route.runtime.prod.js:6:32486)
at async eI.handle (/var/task/node_modules/next/dist/compiled/next-server/app-route.runtime.prod.js:6:43737)
at async Y (/var/task/node_modules/next/dist/compiled/next-server/server.runtime.prod.js:16:24556)
at async Q.responseCache.get.routeKind (/var/task/node_modules/next/dist/compiled/next-server/server.runtime.prod.js:17:1025)
at async r3.renderToResponseWithComponentsImpl (/var/task/node_modules/next/dist/compiled/next-server/server.runtime.prod.js:17:507) {
cause: Error: connect ECONNREFUSED 127.0.0.1:11434
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1555:16)
at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:128:17) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 11434
}
}
```
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2513/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8339
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8339/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8339/comments
|
https://api.github.com/repos/ollama/ollama/issues/8339/events
|
https://github.com/ollama/ollama/issues/8339
| 2,773,453,511
|
I_kwDOJ0Z1Ps6lT4rH
| 8,339
|
`GIN_MODE` not able to set to `release`
|
{
"login": "leobenkel",
"id": 4960573,
"node_id": "MDQ6VXNlcjQ5NjA1NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4960573?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leobenkel",
"html_url": "https://github.com/leobenkel",
"followers_url": "https://api.github.com/users/leobenkel/followers",
"following_url": "https://api.github.com/users/leobenkel/following{/other_user}",
"gists_url": "https://api.github.com/users/leobenkel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leobenkel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leobenkel/subscriptions",
"organizations_url": "https://api.github.com/users/leobenkel/orgs",
"repos_url": "https://api.github.com/users/leobenkel/repos",
"events_url": "https://api.github.com/users/leobenkel/events{/privacy}",
"received_events_url": "https://api.github.com/users/leobenkel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 13
| 2025-01-07T18:11:06
| 2025-01-28T12:35:01
| 2025-01-16T00:01:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello,
In `v0.5.4`, I am not able to set `GIN_MODE` to `release`:
I have
```log
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
```
in the log.
But in my `env`:
```.env
GIN_MODE=release
```
it is running in a docker image:
```Dockerfile
ARG GIN_MODE=debug
ENV GIN_MODE=$GIN_MODE
```
### OS
Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8339/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/164
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/164/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/164/comments
|
https://api.github.com/repos/ollama/ollama/issues/164/events
|
https://github.com/ollama/ollama/pull/164
| 1,816,106,038
|
PR_kwDOJ0Z1Ps5WHT6H
| 164
|
restart server more gracefully
|
{
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyeva/followers",
"following_url": "https://api.github.com/users/hoyyeva/following{/other_user}",
"gists_url": "https://api.github.com/users/hoyyeva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoyyeva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoyyeva/subscriptions",
"organizations_url": "https://api.github.com/users/hoyyeva/orgs",
"repos_url": "https://api.github.com/users/hoyyeva/repos",
"events_url": "https://api.github.com/users/hoyyeva/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoyyeva/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-21T16:49:58
| 2023-07-22T22:19:26
| 2023-07-22T22:19:22
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/164",
"html_url": "https://github.com/ollama/ollama/pull/164",
"diff_url": "https://github.com/ollama/ollama/pull/164.diff",
"patch_url": "https://github.com/ollama/ollama/pull/164.patch",
"merged_at": "2023-07-22T22:19:22"
}
|
fix for https://github.com/jmorganca/ollama/issues/154
|
{
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyeva/followers",
"following_url": "https://api.github.com/users/hoyyeva/following{/other_user}",
"gists_url": "https://api.github.com/users/hoyyeva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoyyeva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoyyeva/subscriptions",
"organizations_url": "https://api.github.com/users/hoyyeva/orgs",
"repos_url": "https://api.github.com/users/hoyyeva/repos",
"events_url": "https://api.github.com/users/hoyyeva/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoyyeva/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/164/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5652
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5652/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5652/comments
|
https://api.github.com/repos/ollama/ollama/issues/5652/events
|
https://github.com/ollama/ollama/issues/5652
| 2,405,934,048
|
I_kwDOJ0Z1Ps6PZ6Pg
| 5,652
|
A path to GPU support for Ollama in a VM/container on Apple Silicon
|
{
"login": "easp",
"id": 414705,
"node_id": "MDQ6VXNlcjQxNDcwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/414705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/easp",
"html_url": "https://github.com/easp",
"followers_url": "https://api.github.com/users/easp/followers",
"following_url": "https://api.github.com/users/easp/following{/other_user}",
"gists_url": "https://api.github.com/users/easp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/easp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/easp/subscriptions",
"organizations_url": "https://api.github.com/users/easp/orgs",
"repos_url": "https://api.github.com/users/easp/repos",
"events_url": "https://api.github.com/users/easp/events{/privacy}",
"received_events_url": "https://api.github.com/users/easp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-07-12T16:12:54
| 2024-07-15T19:04:38
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I came across a short discussion in the llama.cpp repo about using a [GPU on Apple Silicon from within a vm/container](https://github.com/ggerganov/llama.cpp/discussions/8042). Performance isn't as good as bare metal, but it's a significant improvement over CPU-only inference.
The meat of it is [this work](https://sinrega.org/2024-03-06-enabling-containers-gpu-macos/) from March, which provides a virtualized Vulkan interface inside the VM that is then passed through to a Vulkan-to-Metal layer on the host macOS.
For this to be relevant to Ollama, I think it would require this proof-of-concept work with Podman to be implemented in Docker Desktop for Mac, but I'm noting it here because it shows a way forward.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5652/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4324
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4324/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4324/comments
|
https://api.github.com/repos/ollama/ollama/issues/4324/events
|
https://github.com/ollama/ollama/pull/4324
| 2,290,412,684
|
PR_kwDOJ0Z1Ps5vIptv
| 4,324
|
update go deps
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-05-10T21:04:59
| 2024-05-11T17:15:28
| 2024-05-11T04:39:27
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4324",
"html_url": "https://github.com/ollama/ollama/pull/4324",
"diff_url": "https://github.com/ollama/ollama/pull/4324.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4324.patch",
"merged_at": "2024-05-11T04:39:27"
}
|
Fixes #4297
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4324/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4643
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4643/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4643/comments
|
https://api.github.com/repos/ollama/ollama/issues/4643/events
|
https://github.com/ollama/ollama/issues/4643
| 2,317,406,597
|
I_kwDOJ0Z1Ps6KINGF
| 4,643
|
Llama.cpp now supports distributed inference across multiple machines.
|
{
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.github.com/users/AncientMystic/followers",
"following_url": "https://api.github.com/users/AncientMystic/following{/other_user}",
"gists_url": "https://api.github.com/users/AncientMystic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AncientMystic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AncientMystic/subscriptions",
"organizations_url": "https://api.github.com/users/AncientMystic/orgs",
"repos_url": "https://api.github.com/users/AncientMystic/repos",
"events_url": "https://api.github.com/users/AncientMystic/events{/privacy}",
"received_events_url": "https://api.github.com/users/AncientMystic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 20
| 2024-05-26T02:57:20
| 2025-01-13T18:35:27
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Llama.cpp now supports distribution across multiple devices to boost speeds; this would be a great addition to Ollama.
https://github.com/ggerganov/llama.cpp/tree/master/examples/rpc
https://www.reddit.com/r/LocalLLaMA/comments/1cyzi9e/llamacpp_now_supports_distributed_inference/
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4643/reactions",
"total_count": 29,
"+1": 28,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/4643/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8172
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8172/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8172/comments
|
https://api.github.com/repos/ollama/ollama/issues/8172/events
|
https://github.com/ollama/ollama/pull/8172
| 2,749,943,664
|
PR_kwDOJ0Z1Ps6Fwy8g
| 8,172
|
Updated the version of golang/crypto package to handle CVE-2024-45337
|
{
"login": "Squishedmac",
"id": 88924339,
"node_id": "MDQ6VXNlcjg4OTI0MzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/88924339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Squishedmac",
"html_url": "https://github.com/Squishedmac",
"followers_url": "https://api.github.com/users/Squishedmac/followers",
"following_url": "https://api.github.com/users/Squishedmac/following{/other_user}",
"gists_url": "https://api.github.com/users/Squishedmac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Squishedmac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Squishedmac/subscriptions",
"organizations_url": "https://api.github.com/users/Squishedmac/orgs",
"repos_url": "https://api.github.com/users/Squishedmac/repos",
"events_url": "https://api.github.com/users/Squishedmac/events{/privacy}",
"received_events_url": "https://api.github.com/users/Squishedmac/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-12-19T10:56:32
| 2024-12-20T17:30:07
| 2024-12-20T17:29:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8172",
"html_url": "https://github.com/ollama/ollama/pull/8172",
"diff_url": "https://github.com/ollama/ollama/pull/8172.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8172.patch",
"merged_at": "2024-12-20T17:29:30"
}
|
Fixed the CVE-2024-45337 finding on the Wiz scan for the Docker image; updating golang.org/x/crypto to the latest version should fix the issue.
This issue is a hindrance that prevents the base image from being used in any real production scenario.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8172/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4816
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4816/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4816/comments
|
https://api.github.com/repos/ollama/ollama/issues/4816/events
|
https://github.com/ollama/ollama/issues/4816
| 2,333,788,438
|
I_kwDOJ0Z1Ps6LGskW
| 4,816
|
deepseek v2 lite and 236b gets crazy after some conversations
|
{
"login": "netspym",
"id": 74223710,
"node_id": "MDQ6VXNlcjc0MjIzNzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/74223710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/netspym",
"html_url": "https://github.com/netspym",
"followers_url": "https://api.github.com/users/netspym/followers",
"following_url": "https://api.github.com/users/netspym/following{/other_user}",
"gists_url": "https://api.github.com/users/netspym/gists{/gist_id}",
"starred_url": "https://api.github.com/users/netspym/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/netspym/subscriptions",
"organizations_url": "https://api.github.com/users/netspym/orgs",
"repos_url": "https://api.github.com/users/netspym/repos",
"events_url": "https://api.github.com/users/netspym/events{/privacy}",
"received_events_url": "https://api.github.com/users/netspym/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 2
| 2024-06-04T15:12:09
| 2024-06-05T07:24:59
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi Ollama team:
I loaded the models and kept them in memory. After some conversations, the models start to respond with strange text and code. After restarting ollama on Ubuntu, the issue is gone.
### OS
Linux
### GPU
_No response_
### CPU
AMD
### Ollama version
0.1.40
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4816/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4816/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1416
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1416/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1416/comments
|
https://api.github.com/repos/ollama/ollama/issues/1416/events
|
https://github.com/ollama/ollama/issues/1416
| 2,030,941,798
|
I_kwDOJ0Z1Ps55DbZm
| 1,416
|
Attempting to load a model smaller than 10GiB into 12.2GiB GPU results in failing over to load into the host RAM.
|
{
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"repos_url": "https://api.github.com/users/phalexo/repos",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2023-12-07T14:59:04
| 2024-03-12T16:47:48
| 2024-03-12T16:44:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have converted losslessmegacoder-llama2-13b-min.Q6_K.model to ollama format.
On my attempt to load, it reports the size of the model is < 10GiB, but when I do "ollama run losslessmegacoder-llama2-13b-min.Q6_K" it attempts to load it into the GPU, apparently runs out of VRAM, and loads into the host instead.
If the model is smaller than 10GiB, why is it using an additional 2.2GiB, and is there anything I can do to mitigate this?
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1416/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5841
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5841/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5841/comments
|
https://api.github.com/repos/ollama/ollama/issues/5841/events
|
https://github.com/ollama/ollama/issues/5841
| 2,421,834,741
|
I_kwDOJ0Z1Ps6QWkP1
| 5,841
|
Manage internlm2 models
|
{
"login": "RunningLeon",
"id": 28671653,
"node_id": "MDQ6VXNlcjI4NjcxNjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/28671653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RunningLeon",
"html_url": "https://github.com/RunningLeon",
"followers_url": "https://api.github.com/users/RunningLeon/followers",
"following_url": "https://api.github.com/users/RunningLeon/following{/other_user}",
"gists_url": "https://api.github.com/users/RunningLeon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RunningLeon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RunningLeon/subscriptions",
"organizations_url": "https://api.github.com/users/RunningLeon/orgs",
"repos_url": "https://api.github.com/users/RunningLeon/repos",
"events_url": "https://api.github.com/users/RunningLeon/events{/privacy}",
"received_events_url": "https://api.github.com/users/RunningLeon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-07-22T03:28:41
| 2024-09-04T03:43:43
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi, maintainers of ollama. I'm one of the developers of the [internlm2](https://huggingface.co/internlm) models, and there is a bug with the https://ollama.com/library/internlm2 model, as described in issue #5761. I've uploaded some models to my account, as in https://ollama.com/internlm/internlm2.5. Now I wonder how to update the [model](https://ollama.com/library/internlm2) uploaded by the ollama maintainers?
Thanks.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5841/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5841/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2573
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2573/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2573/comments
|
https://api.github.com/repos/ollama/ollama/issues/2573/events
|
https://github.com/ollama/ollama/issues/2573
| 2,140,950,675
|
I_kwDOJ0Z1Ps5_nFCT
| 2,573
|
Ollama for Windows, seems to crash on GPU with cuda
|
{
"login": "amonpaike",
"id": 884282,
"node_id": "MDQ6VXNlcjg4NDI4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/884282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amonpaike",
"html_url": "https://github.com/amonpaike",
"followers_url": "https://api.github.com/users/amonpaike/followers",
"following_url": "https://api.github.com/users/amonpaike/following{/other_user}",
"gists_url": "https://api.github.com/users/amonpaike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amonpaike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amonpaike/subscriptions",
"organizations_url": "https://api.github.com/users/amonpaike/orgs",
"repos_url": "https://api.github.com/users/amonpaike/repos",
"events_url": "https://api.github.com/users/amonpaike/events{/privacy}",
"received_events_url": "https://api.github.com/users/amonpaike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-18T11:46:37
| 2024-02-19T20:55:07
| 2024-02-19T20:55:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
My GTX 1060 GPU has little VRAM (3GB), but with the smaller models **it works great, and on native Linux it is super fast** with, for example, tinyllama, stablelm2, phi2, etc.
**On Windows with CUDA it seems to crash. I start a model with, for example, "ollama run stablelm2" and after a few seconds it crashes.**
On the CPU it's slow but seems to work well, even though my CPU only supports AVX. To start a model on the CPU I must first start some app that consumes all the GPU VRAM, so ollama starts on the CPU. (It's a trick I used due to lack of knowledge.)
Analyzing the log from the "ollama serve" command, it seems there are some paths that don't exist on my computer:
**C:/Program Files/Go/ etc...
and
C:/Users/jeff/git/ollama/ etc...**
I don't know if the crash depends on this; to my limited knowledge it's the only thing that seems anomalous.
I am on the latest nvidia drivers available on Windows: 531.79
On other LLM management apps on Windows 11, like **LM Studio** and **text-generation-webui**, I have no problems getting the models to work on the GPU.
```
C:\Users\noki>ollama serve
time=2024-02-18T12:16:23.109+01:00 level=INFO source=images.go:706 msg="total blobs: 10"
time=2024-02-18T12:16:23.233+01:00 level=INFO source=images.go:713 msg="total unused blobs removed: 0"
time=2024-02-18T12:16:23.286+01:00 level=INFO source=routes.go:1014 msg="Listening on 127.0.0.1:11434 (version 0.1.25)"
time=2024-02-18T12:16:23.306+01:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-02-18T12:16:23.937+01:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cpu cuda_v11.3]"
[GIN] 2024/02/18 - 12:16:58 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/02/18 - 12:16:58 | 200 | 3.832ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/02/18 - 12:16:58 | 200 | 2.2639ms | 127.0.0.1 | POST "/api/show"
time=2024-02-18T12:17:01.899+01:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-18T12:17:01.899+01:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library nvml.dll"
time=2024-02-18T12:17:01.909+01:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [c:\\Windows\\System32\\nvml.dll C:\\Windows\\system32\\nvml.dll]"
time=2024-02-18T12:17:01.945+01:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
time=2024-02-18T12:17:01.947+01:00 level=INFO source=cpu_common.go:15 msg="CPU has AVX"
time=2024-02-18T12:17:01.953+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 6.1"
time=2024-02-18T12:17:01.953+01:00 level=INFO source=cpu_common.go:15 msg="CPU has AVX"
time=2024-02-18T12:17:01.953+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 6.1"
time=2024-02-18T12:17:01.953+01:00 level=INFO source=cpu_common.go:15 msg="CPU has AVX"
time=2024-02-18T12:17:01.954+01:00 level=INFO source=dyn_ext_server.go:380 msg="Updating PATH to C:\\Users\\noki\\AppData\\Local\\Temp\\ollama1799801675\\cuda_v11.3;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files (x86)\\ATI Technologies\\ATI.ACE\\Core-Static;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\dotnet\\;C:\\Program Files\\gsudo\\Current;C:\\Users\\noki\\AppData\\Local\\Microsoft\\WindowsApps;C:\\adb;C:\\Users\\noki\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\;C:\\Users\\noki\\AppData\\Local\\Programs\\Python\\Python311\\;C:\\Users\\noki\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Users\\noki\\AppData\\Local\\Programs\\Ollama"
loading library C:\Users\noki\AppData\Local\Temp\ollama1799801675\cuda_v11.3\ext_server.dll
time=2024-02-18T12:17:02.756+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\noki\\AppData\\Local\\Temp\\ollama1799801675\\cuda_v11.3\\ext_server.dll"
time=2024-02-18T12:17:02.756+01:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce GTX 1060 3GB, compute capability 6.1, VMM: yes
Exception 0xc000001d 0x0 0x0 0x7ffa57df99dd
PC=0x7ffa57df99dd
signal arrived during external code execution
runtime.cgocall(0x7ff6cdf553c0, 0xc0001a6758)
C:/Program Files/Go/src/runtime/cgocall.go:157 +0x3e fp=0xc0001a6730 sp=0xc0001a66f8 pc=0x7ff6cd9691be
github.com/jmorganca/ollama/llm._Cfunc_dyn_llama_server_init({0x7ffa57db0000, 0x7ffa57e3a1e0, 0x7ffa57e3ae00, 0x7ffa57e3ae80, 0x7ffa57e38660, 0x7ffa57e38820, 0x7ffa57e387a0, 0x7ffa57e3ade0, 0x7ffa57e3b540, 0x7ffa57e39560, ...}, ...)
_cgo_gotypes.go:282 +0x4d fp=0xc0001a6758 sp=0xc0001a6730 pc=0x7ff6cdd58b8d
github.com/jmorganca/ollama/llm.newDynExtServer.func7(0xc0004ac140, 0xc00058e510)
C:/Users/jeff/git/ollama/llm/dyn_ext_server.go:148 +0x112 fp=0xc0001a6898 sp=0xc0001a6758 pc=0x7ff6cdd5a2d2
github.com/jmorganca/ollama/llm.newDynExtServer({0xc0000cb700, 0x4b}, {0xc00022a180, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
C:/Users/jeff/git/ollama/llm/dyn_ext_server.go:148 +0xac5 fp=0xc0001a6ae0 sp=0xc0001a6898 pc=0x7ff6cdd59f05
github.com/jmorganca/ollama/llm.newLlmServer({{_, _, _}, {_, _}, {_, _}}, {_, _}, {0xc00022a180, ...}, ...)
C:/Users/jeff/git/ollama/llm/llm.go:158 +0x4c5 fp=0xc0001a6ca0 sp=0xc0001a6ae0 pc=0x7ff6cdd563c5
github.com/jmorganca/ollama/llm.New({0xc0000208c0, 0x31}, {0xc00022a180, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
C:/Users/jeff/git/ollama/llm/llm.go:123 +0x78e fp=0xc0001a6f20 sp=0xc0001a6ca0 pc=0x7ff6cdd55cae
github.com/jmorganca/ollama/server.load(0xc00017e000, 0xc0001cc000, {{0x0, 0x800, 0x200, 0x1, 0xffffffffffffffff, 0x0, 0x0, 0x1, ...}, ...}, ...)
C:/Users/jeff/git/ollama/server/routes.go:85 +0x3a9 fp=0xc0001a7090 sp=0xc0001a6f20 pc=0x7ff6cdf29c69
github.com/jmorganca/ollama/server.ChatHandler(0xc00017e000)
C:/Users/jeff/git/ollama/server/routes.go:1163 +0x60b fp=0xc0001a7770 sp=0xc0001a7090 pc=0x7ff6cdf34ccb
github.com/gin-gonic/gin.(*Context).Next(...)
C:/Users/jeff/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func1(0xc00017e000)
C:/Users/jeff/git/ollama/server/routes.go:938 +0x68 fp=0xc0001a77a8 sp=0xc0001a7770 pc=0x7ff6cdf338a8
github.com/gin-gonic/gin.(*Context).Next(...)
C:/Users/jeff/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.CustomRecoveryWithWriter.func1(0xc00017e000)
C:/Users/jeff/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/recovery.go:102 +0x7a fp=0xc0001a77f8 sp=0xc0001a77a8 pc=0x7ff6cdf0afda
github.com/gin-gonic/gin.(*Context).Next(...)
C:/Users/jeff/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.LoggerWithConfig.func1(0xc00017e000)
C:/Users/jeff/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/logger.go:240 +0xdd fp=0xc0001a79a8 sp=0xc0001a77f8 pc=0x7ff6cdf0a11d
github.com/gin-gonic/gin.(*Context).Next(...)
C:/Users/jeff/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.(*Engine).handleHTTPRequest(0xc0004d41a0, 0xc00017e000)
C:/Users/jeff/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:620 +0x66e fp=0xc0001a7b28 sp=0xc0001a79a8 pc=0x7ff6cdf0964e
github.com/gin-gonic/gin.(*Engine).ServeHTTP(0xc0004d41a0, {0x7ff6cf3e5f00, 0xc00016c380}, 0xc000166360)
C:/Users/jeff/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:576 +0x1b2 fp=0xc0001a7b60 sp=0xc0001a7b28 pc=0x7ff6cdf08e12
net/http.serverHandler.ServeHTTP({0x7ff6cf3e41c0?}, {0x7ff6cf3e5f00?, 0xc00016c380?}, 0x6?)
C:/Program Files/Go/src/net/http/server.go:3137 +0x8e fp=0xc0001a7b90 sp=0xc0001a7b60 pc=0x7ff6cdc5968e
net/http.(*conn).serve(0xc00016ac60, {0x7ff6cf3e7568, 0xc0004aa3c0})
C:/Program Files/Go/src/net/http/server.go:2039 +0x5e8 fp=0xc0001a7fb8 sp=0xc0001a7b90 pc=0x7ff6cdc54a48
net/http.(*Server).Serve.gowrap3()
C:/Program Files/Go/src/net/http/server.go:3285 +0x28 fp=0xc0001a7fe0 sp=0xc0001a7fb8 pc=0x7ff6cdc59ea8
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc0001a7fe8 sp=0xc0001a7fe0 pc=0x7ff6cd9d2081
created by net/http.(*Server).Serve in goroutine 1
C:/Program Files/Go/src/net/http/server.go:3285 +0x4b4
goroutine 1 gp=0xc000058000 m=nil [IO wait]:
runtime.gopark(0xc000065008?, 0x7ff6cdfb6560?, 0xa0?, 0x16?, 0xc0001016d0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00051b6f8 sp=0xc00051b6d8 pc=0x7ff6cd9a12ee
runtime.netpollblock(0x1e4?, 0xcd968cc6?, 0xf6?)
C:/Program Files/Go/src/runtime/netpoll.go:573 +0xf7 fp=0xc00051b730 sp=0xc00051b6f8 pc=0x7ff6cd998c37
internal/poll.runtime_pollWait(0x1b97d601ee0, 0x72)
C:/Program Files/Go/src/runtime/netpoll.go:345 +0x85 fp=0xc00051b750 sp=0xc00051b730 pc=0x7ff6cd9cbb25
internal/poll.(*pollDesc).wait(0xc000065408?, 0x0?, 0x0)
C:/Program Files/Go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00051b778 sp=0xc00051b750 pc=0x7ff6cda63bc7
internal/poll.execIO(0xc0001016a0, 0xc00051b818)
C:/Program Files/Go/src/internal/poll/fd_windows.go:175 +0xe6 fp=0xc00051b7e8 sp=0xc00051b778 pc=0x7ff6cda650a6
internal/poll.(*FD).acceptOne(0xc000101688, 0x284, {0xc0004b80f0?, 0x0?, 0x0?}, 0xc000065008?)
C:/Program Files/Go/src/internal/poll/fd_windows.go:944 +0x67 fp=0xc00051b848 sp=0xc00051b7e8 pc=0x7ff6cda69767
internal/poll.(*FD).Accept(0xc000101688, 0xc00051b9f8)
C:/Program Files/Go/src/internal/poll/fd_windows.go:978 +0x1bc fp=0xc00051b900 sp=0xc00051b848 pc=0x7ff6cda69a9c
net.(*netFD).accept(0xc000101688)
C:/Program Files/Go/src/net/fd_windows.go:178 +0x54 fp=0xc00051ba18 sp=0xc00051b900 pc=0x7ff6cdae50d4
net.(*TCPListener).accept(0xc000436420)
C:/Program Files/Go/src/net/tcpsock_posix.go:159 +0x1e fp=0xc00051ba40 sp=0xc00051ba18 pc=0x7ff6cdafa57e
net.(*TCPListener).Accept(0xc000436420)
C:/Program Files/Go/src/net/tcpsock.go:327 +0x30 fp=0xc00051ba70 sp=0xc00051ba40 pc=0x7ff6cdaf9670
net/http.(*onceCloseListener).Accept(0xc00016ac60?)
<autogenerated>:1 +0x24 fp=0xc00051ba88 sp=0xc00051ba70 pc=0x7ff6cdc7c064
net/http.(*Server).Serve(0xc0001d4000, {0x7ff6cf3e5cf0, 0xc000436420})
C:/Program Files/Go/src/net/http/server.go:3255 +0x33e fp=0xc00051bbb8 sp=0xc00051ba88 pc=0x7ff6cdc59abe
github.com/jmorganca/ollama/server.Serve({0x7ff6cf3e5cf0, 0xc000436420})
C:/Users/jeff/git/ollama/server/routes.go:1041 +0x46f fp=0xc00051bcc0 sp=0xc00051bbb8 pc=0x7ff6cdf33d6f
github.com/jmorganca/ollama/cmd.RunServer(0xc0004ceb00?, {0x7ff6cf8718a0?, 0x4?, 0x7ff6ce0f0f6b?})
C:/Users/jeff/git/ollama/cmd/cmd.go:706 +0x1a5 fp=0xc00051bd58 sp=0xc00051bcc0 pc=0x7ff6cdf4cbc5
github.com/spf13/cobra.(*Command).execute(0xc00001a908, {0x7ff6cf8718a0, 0x0, 0x0})
C:/Users/jeff/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x882 fp=0xc00051be78 sp=0xc00051bd58 pc=0x7ff6cdcf5462
github.com/spf13/cobra.(*Command).ExecuteC(0xc0000f7808)
C:/Users/jeff/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc00051bf30 sp=0xc00051be78 pc=0x7ff6cdcf5ca5
github.com/spf13/cobra.(*Command).Execute(...)
C:/Users/jeff/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
C:/Users/jeff/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
C:/Users/jeff/git/ollama/main.go:11 +0x4d fp=0xc00051bf50 sp=0xc00051bf30 pc=0x7ff6cdf5512d
runtime.main()
C:/Program Files/Go/src/runtime/proc.go:271 +0x28b fp=0xc00051bfe0 sp=0xc00051bf50 pc=0x7ff6cd9a0eeb
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00051bfe8 sp=0xc00051bfe0 pc=0x7ff6cd9d2081
goroutine 2 gp=0xc000058700 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00005bfa8 sp=0xc00005bf88 pc=0x7ff6cd9a12ee
runtime.goparkunlock(...)
C:/Program Files/Go/src/runtime/proc.go:408
runtime.forcegchelper()
C:/Program Files/Go/src/runtime/proc.go:326 +0xb8 fp=0xc00005bfe0 sp=0xc00005bfa8 pc=0x7ff6cd9a1178
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00005bfe8 sp=0xc00005bfe0 pc=0x7ff6cd9d2081
created by runtime.init.6 in goroutine 1
C:/Program Files/Go/src/runtime/proc.go:314 +0x1a
goroutine 3 gp=0xc000058a80 m=nil [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00005df80 sp=0xc00005df60 pc=0x7ff6cd9a12ee
runtime.goparkunlock(...)
C:/Program Files/Go/src/runtime/proc.go:408
runtime.bgsweep(0xc000028070)
C:/Program Files/Go/src/runtime/mgcsweep.go:318 +0xdf fp=0xc00005dfc8 sp=0xc00005df80 pc=0x7ff6cd98b43f
runtime.gcenable.gowrap1()
C:/Program Files/Go/src/runtime/mgc.go:203 +0x25 fp=0xc00005dfe0 sp=0xc00005dfc8 pc=0x7ff6cd97fce5
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00005dfe8 sp=0xc00005dfe0 pc=0x7ff6cd9d2081
created by runtime.gcenable in goroutine 1
C:/Program Files/Go/src/runtime/mgc.go:203 +0x66
goroutine 4 gp=0xc000058c40 m=nil [GC scavenge wait]:
runtime.gopark(0xe8ab48?, 0x7e6c93?, 0x0?, 0x0?, 0x0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00006df78 sp=0xc00006df58 pc=0x7ff6cd9a12ee
runtime.goparkunlock(...)
C:/Program Files/Go/src/runtime/proc.go:408
runtime.(*scavengerState).park(0x7ff6cf7eaae0)
C:/Program Files/Go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc00006dfa8 sp=0xc00006df78 pc=0x7ff6cd988dc9
runtime.bgscavenge(0xc000028070)
C:/Program Files/Go/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc00006dfc8 sp=0xc00006dfa8 pc=0x7ff6cd989379
runtime.gcenable.gowrap2()
C:/Program Files/Go/src/runtime/mgc.go:204 +0x25 fp=0xc00006dfe0 sp=0xc00006dfc8 pc=0x7ff6cd97fc85
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00006dfe8 sp=0xc00006dfe0 pc=0x7ff6cd9d2081
created by runtime.gcenable in goroutine 1
C:/Program Files/Go/src/runtime/mgc.go:204 +0xa5
goroutine 5 gp=0xc000059180 m=nil [finalizer wait]:
runtime.gopark(0xc00005fe48?, 0x7ff6cd9730a5?, 0xa8?, 0x1?, 0xc000058000?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00005fe20 sp=0xc00005fe00 pc=0x7ff6cd9a12ee
runtime.runfinq()
C:/Program Files/Go/src/runtime/mfinal.go:194 +0x107 fp=0xc00005ffe0 sp=0xc00005fe20 pc=0x7ff6cd97ed67
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00005ffe8 sp=0xc00005ffe0 pc=0x7ff6cd9d2081
created by runtime.createfing in goroutine 1
C:/Program Files/Go/src/runtime/mfinal.go:164 +0x3d
goroutine 18 gp=0xc000084fc0 m=nil [GC worker (idle)]:
runtime.gopark(0x261f3dcbb704?, 0x1?, 0xfc?, 0xfa?, 0x0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc000069f50 sp=0xc000069f30 pc=0x7ff6cd9a12ee
runtime.gcBgMarkWorker()
C:/Program Files/Go/src/runtime/mgc.go:1310 +0xe5 fp=0xc000069fe0 sp=0xc000069f50 pc=0x7ff6cd981e25
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000069fe8 sp=0xc000069fe0 pc=0x7ff6cd9d2081
created by runtime.gcBgMarkStartWorkers in goroutine 1
C:/Program Files/Go/src/runtime/mgc.go:1234 +0x1c
goroutine 19 gp=0xc000085180 m=nil [GC worker (idle)]:
runtime.gopark(0x261f3d139e80?, 0x3?, 0x0?, 0x0?, 0x0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00006bf50 sp=0xc00006bf30 pc=0x7ff6cd9a12ee
runtime.gcBgMarkWorker()
C:/Program Files/Go/src/runtime/mgc.go:1310 +0xe5 fp=0xc00006bfe0 sp=0xc00006bf50 pc=0x7ff6cd981e25
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00006bfe8 sp=0xc00006bfe0 pc=0x7ff6cd9d2081
created by runtime.gcBgMarkStartWorkers in goroutine 1
C:/Program Files/Go/src/runtime/mgc.go:1234 +0x1c
goroutine 34 gp=0xc000482000 m=nil [GC worker (idle)]:
runtime.gopark(0x261f3dcbb704?, 0x3?, 0xfc?, 0xfa?, 0x0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc000489f50 sp=0xc000489f30 pc=0x7ff6cd9a12ee
runtime.gcBgMarkWorker()
C:/Program Files/Go/src/runtime/mgc.go:1310 +0xe5 fp=0xc000489fe0 sp=0xc000489f50 pc=0x7ff6cd981e25
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000489fe8 sp=0xc000489fe0 pc=0x7ff6cd9d2081
created by runtime.gcBgMarkStartWorkers in goroutine 1
C:/Program Files/Go/src/runtime/mgc.go:1234 +0x1c
goroutine 35 gp=0xc0004821c0 m=nil [GC worker (idle)]:
runtime.gopark(0x261f3dcbb704?, 0x3?, 0xf8?, 0xca?, 0x0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00048bf50 sp=0xc00048bf30 pc=0x7ff6cd9a12ee
runtime.gcBgMarkWorker()
C:/Program Files/Go/src/runtime/mgc.go:1310 +0xe5 fp=0xc00048bfe0 sp=0xc00048bf50 pc=0x7ff6cd981e25
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00048bfe8 sp=0xc00048bfe0 pc=0x7ff6cd9d2081
created by runtime.gcBgMarkStartWorkers in goroutine 1
C:/Program Files/Go/src/runtime/mgc.go:1234 +0x1c
goroutine 36 gp=0xc000482380 m=nil [GC worker (idle)]:
runtime.gopark(0x261f3dcbb704?, 0x3?, 0x0?, 0x0?, 0x0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc000485f50 sp=0xc000485f30 pc=0x7ff6cd9a12ee
runtime.gcBgMarkWorker()
C:/Program Files/Go/src/runtime/mgc.go:1310 +0xe5 fp=0xc000485fe0 sp=0xc000485f50 pc=0x7ff6cd981e25
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000485fe8 sp=0xc000485fe0 pc=0x7ff6cd9d2081
created by runtime.gcBgMarkStartWorkers in goroutine 1
C:/Program Files/Go/src/runtime/mgc.go:1234 +0x1c
goroutine 37 gp=0xc000482540 m=nil [GC worker (idle)]:
runtime.gopark(0x261f3dcbb704?, 0x3?, 0xfc?, 0xfa?, 0x0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc000487f50 sp=0xc000487f30 pc=0x7ff6cd9a12ee
runtime.gcBgMarkWorker()
C:/Program Files/Go/src/runtime/mgc.go:1310 +0xe5 fp=0xc000487fe0 sp=0xc000487f50 pc=0x7ff6cd981e25
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000487fe8 sp=0xc000487fe0 pc=0x7ff6cd9d2081
created by runtime.gcBgMarkStartWorkers in goroutine 1
C:/Program Files/Go/src/runtime/mgc.go:1234 +0x1c
goroutine 38 gp=0xc000482700 m=nil [GC worker (idle)]:
runtime.gopark(0x261f3dcbb704?, 0x1?, 0x0?, 0x0?, 0x0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc000493f50 sp=0xc000493f30 pc=0x7ff6cd9a12ee
runtime.gcBgMarkWorker()
C:/Program Files/Go/src/runtime/mgc.go:1310 +0xe5 fp=0xc000493fe0 sp=0xc000493f50 pc=0x7ff6cd981e25
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000493fe8 sp=0xc000493fe0 pc=0x7ff6cd9d2081
created by runtime.gcBgMarkStartWorkers in goroutine 1
C:/Program Files/Go/src/runtime/mgc.go:1234 +0x1c
goroutine 6 gp=0xc000059500 m=nil [GC worker (idle)]:
runtime.gopark(0x261f3dcbb704?, 0x1?, 0x0?, 0x0?, 0x0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00006ff50 sp=0xc00006ff30 pc=0x7ff6cd9a12ee
runtime.gcBgMarkWorker()
C:/Program Files/Go/src/runtime/mgc.go:1310 +0xe5 fp=0xc00006ffe0 sp=0xc00006ff50 pc=0x7ff6cd981e25
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00006ffe8 sp=0xc00006ffe0 pc=0x7ff6cd9d2081
created by runtime.gcBgMarkStartWorkers in goroutine 1
C:/Program Files/Go/src/runtime/mgc.go:1234 +0x1c
goroutine 7 gp=0xc0006041c0 m=0 mp=0x7ff6cf7ec420 [syscall]:
runtime.notetsleepg(0x7ff6cf872380, 0xffffffffffffffff)
C:/Program Files/Go/src/runtime/lock_sema.go:296 +0x31 fp=0xc000491fa0 sp=0xc000491f68 pc=0x7ff6cd971671
os/signal.signal_recv()
C:/Program Files/Go/src/runtime/sigqueue.go:152 +0x29 fp=0xc000491fc0 sp=0xc000491fa0 pc=0x7ff6cd9cdd69
os/signal.loop()
C:/Program Files/Go/src/os/signal/signal_unix.go:23 +0x13 fp=0xc000491fe0 sp=0xc000491fc0 pc=0x7ff6cdc7e413
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000491fe8 sp=0xc000491fe0 pc=0x7ff6cd9d2081
created by os/signal.Notify.func1.1 in goroutine 1
C:/Program Files/Go/src/os/signal/signal.go:151 +0x1f
goroutine 8 gp=0xc000604380 m=nil [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc000495f18 sp=0xc000495ef8 pc=0x7ff6cd9a12ee
runtime.chanrecv(0xc00038c5a0, 0x0, 0x1)
C:/Program Files/Go/src/runtime/chan.go:583 +0x3cd fp=0xc000495f90 sp=0xc000495f18 pc=0x7ff6cd96b84d
runtime.chanrecv1(0x0?, 0x0?)
C:/Program Files/Go/src/runtime/chan.go:442 +0x12 fp=0xc000495fb8 sp=0xc000495f90 pc=0x7ff6cd96b452
github.com/jmorganca/ollama/server.Serve.func2()
C:/Users/jeff/git/ollama/server/routes.go:1023 +0x25 fp=0xc000495fe0 sp=0xc000495fb8 pc=0x7ff6cdf33e05
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000495fe8 sp=0xc000495fe0 pc=0x7ff6cd9d2081
created by github.com/jmorganca/ollama/server.Serve in goroutine 1
C:/Users/jeff/git/ollama/server/routes.go:1022 +0x3f6
goroutine 20 gp=0xc000085a40 m=nil [IO wait]:
runtime.gopark(0x0?, 0xc0006211a0?, 0x50?, 0x12?, 0xc0006211d0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc000641890 sp=0xc000641870 pc=0x7ff6cd9a12ee
runtime.netpollblock(0x2a8?, 0xcd968cc6?, 0xf6?)
C:/Program Files/Go/src/runtime/netpoll.go:573 +0xf7 fp=0xc0006418c8 sp=0xc000641890 pc=0x7ff6cd998c37
internal/poll.runtime_pollWait(0x1b97d601de8, 0x72)
C:/Program Files/Go/src/runtime/netpoll.go:345 +0x85 fp=0xc0006418e8 sp=0xc0006418c8 pc=0x7ff6cd9cbb25
internal/poll.(*pollDesc).wait(0xc00050f968?, 0x7ff6cd97b6fd?, 0x0)
C:/Program Files/Go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000641910 sp=0xc0006418e8 pc=0x7ff6cda63bc7
internal/poll.execIO(0xc0006211a0, 0x7ff6cf325970)
C:/Program Files/Go/src/internal/poll/fd_windows.go:175 +0xe6 fp=0xc000641980 sp=0xc000641910 pc=0x7ff6cda650a6
internal/poll.(*FD).Read(0xc000621188, {0xc000649000, 0x1000, 0x1000})
C:/Program Files/Go/src/internal/poll/fd_windows.go:436 +0x2b1 fp=0xc000641a28 sp=0xc000641980 pc=0x7ff6cda65d51
net.(*netFD).Read(0xc000621188, {0xc000649000?, 0xc00050fa98?, 0x7ff6cda640a5?})
C:/Program Files/Go/src/net/fd_posix.go:55 +0x25 fp=0xc000641a70 sp=0xc000641a28 pc=0x7ff6cdae31e5
net.(*conn).Read(0xc0004c4020, {0xc000649000?, 0x0?, 0xc00002a638?})
C:/Program Files/Go/src/net/net.go:179 +0x45 fp=0xc000641ab8 sp=0xc000641a70 pc=0x7ff6cdaf2265
net.(*TCPConn).Read(0xc00002a630?, {0xc000649000?, 0xc000621188?, 0xc00050faf0?})
<autogenerated>:1 +0x25 fp=0xc000641ae8 sp=0xc000641ab8 pc=0x7ff6cdb02285
net/http.(*connReader).Read(0xc00002a630, {0xc000649000, 0x1000, 0x1000})
C:/Program Files/Go/src/net/http/server.go:789 +0x14b fp=0xc000641b38 sp=0xc000641ae8 pc=0x7ff6cdc4ee4b
bufio.(*Reader).fill(0xc000686060)
C:/Program Files/Go/src/bufio/bufio.go:110 +0x103 fp=0xc000641b70 sp=0xc000641b38 pc=0x7ff6cdbd28c3
bufio.(*Reader).Peek(0xc000686060, 0x4)
C:/Program Files/Go/src/bufio/bufio.go:148 +0x53 fp=0xc000641b90 sp=0xc000641b70 pc=0x7ff6cdbd29f3
net/http.(*conn).serve(0xc00016a1b0, {0x7ff6cf3e7568, 0xc0004aa3c0})
C:/Program Files/Go/src/net/http/server.go:2074 +0x749 fp=0xc000641fb8 sp=0xc000641b90 pc=0x7ff6cdc54ba9
net/http.(*Server).Serve.gowrap3()
C:/Program Files/Go/src/net/http/server.go:3285 +0x28 fp=0xc000641fe0 sp=0xc000641fb8 pc=0x7ff6cdc59ea8
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000641fe8 sp=0xc000641fe0 pc=0x7ff6cd9d2081
created by net/http.(*Server).Serve in goroutine 1
C:/Program Files/Go/src/net/http/server.go:3285 +0x4b4
goroutine 21 gp=0xc000085c00 m=nil [IO wait]:
runtime.gopark(0x0?, 0xc000621420?, 0xd0?, 0x14?, 0xc000621450?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc000645890 sp=0xc000645870 pc=0x7ff6cd9a12ee
runtime.netpollblock(0x2a4?, 0xcd968cc6?, 0xf6?)
C:/Program Files/Go/src/runtime/netpoll.go:573 +0xf7 fp=0xc0006458c8 sp=0xc000645890 pc=0x7ff6cd998c37
internal/poll.runtime_pollWait(0x1b97d601cf0, 0x72)
C:/Program Files/Go/src/runtime/netpoll.go:345 +0x85 fp=0xc0006458e8 sp=0xc0006458c8 pc=0x7ff6cd9cbb25
internal/poll.(*pollDesc).wait(0xc00050b930?, 0x7ff6cda6397c?, 0x0)
C:/Program Files/Go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000645910 sp=0xc0006458e8 pc=0x7ff6cda63bc7
internal/poll.execIO(0xc000621420, 0x7ff6cf325970)
C:/Program Files/Go/src/internal/poll/fd_windows.go:175 +0xe6 fp=0xc000645980 sp=0xc000645910 pc=0x7ff6cda650a6
internal/poll.(*FD).Read(0xc000621408, {0xc000528000, 0x1000, 0x1000})
C:/Program Files/Go/src/internal/poll/fd_windows.go:436 +0x2b1 fp=0xc000645a28 sp=0xc000645980 pc=0x7ff6cda65d51
net.(*netFD).Read(0xc000621408, {0xc000528000?, 0xc00050ba98?, 0x7ff6cda640a5?})
C:/Program Files/Go/src/net/fd_posix.go:55 +0x25 fp=0xc000645a70 sp=0xc000645a28 pc=0x7ff6cdae31e5
net.(*conn).Read(0xc0004c4028, {0xc000528000?, 0x0?, 0xc00002b6e8?})
C:/Program Files/Go/src/net/net.go:179 +0x45 fp=0xc000645ab8 sp=0xc000645a70 pc=0x7ff6cdaf2265
net.(*TCPConn).Read(0xc00002b6e0?, {0xc000528000?, 0xc000621408?, 0xc00050baf0?})
<autogenerated>:1 +0x25 fp=0xc000645ae8 sp=0xc000645ab8 pc=0x7ff6cdb02285
net/http.(*connReader).Read(0xc00002b6e0, {0xc000528000, 0x1000, 0x1000})
C:/Program Files/Go/src/net/http/server.go:789 +0x14b fp=0xc000645b38 sp=0xc000645ae8 pc=0x7ff6cdc4ee4b
bufio.(*Reader).fill(0xc000686180)
C:/Program Files/Go/src/bufio/bufio.go:110 +0x103 fp=0xc000645b70 sp=0xc000645b38 pc=0x7ff6cdbd28c3
bufio.(*Reader).Peek(0xc000686180, 0x4)
C:/Program Files/Go/src/bufio/bufio.go:148 +0x53 fp=0xc000645b90 sp=0xc000645b70 pc=0x7ff6cdbd29f3
net/http.(*conn).serve(0xc00016a240, {0x7ff6cf3e7568, 0xc0004aa3c0})
C:/Program Files/Go/src/net/http/server.go:2074 +0x749 fp=0xc000645fb8 sp=0xc000645b90 pc=0x7ff6cdc54ba9
net/http.(*Server).Serve.gowrap3()
C:/Program Files/Go/src/net/http/server.go:3285 +0x28 fp=0xc000645fe0 sp=0xc000645fb8 pc=0x7ff6cdc59ea8
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000645fe8 sp=0xc000645fe0 pc=0x7ff6cd9d2081
created by net/http.(*Server).Serve in goroutine 1
C:/Program Files/Go/src/net/http/server.go:3285 +0x4b4
goroutine 39 gp=0xc0005841c0 m=nil [IO wait]:
runtime.gopark(0x0?, 0xc000620ca0?, 0x50?, 0xd?, 0xc000620cd0?)
C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00048fd28 sp=0xc00048fd08 pc=0x7ff6cd9a12ee
runtime.netpollblock(0x2a0?, 0xcd968cc6?, 0xf6?)
C:/Program Files/Go/src/runtime/netpoll.go:573 +0xf7 fp=0xc00048fd60 sp=0xc00048fd28 pc=0x7ff6cd998c37
internal/poll.runtime_pollWait(0x1b97d601bf8, 0x72)
C:/Program Files/Go/src/runtime/netpoll.go:345 +0x85 fp=0xc00048fd80 sp=0xc00048fd60 pc=0x7ff6cd9cbb25
internal/poll.(*pollDesc).wait(0x10?, 0x10?, 0x0)
C:/Program Files/Go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00048fda8 sp=0xc00048fd80 pc=0x7ff6cda63bc7
internal/poll.execIO(0xc000620ca0, 0x7ff6cf325970)
C:/Program Files/Go/src/internal/poll/fd_windows.go:175 +0xe6 fp=0xc00048fe18 sp=0xc00048fda8 pc=0x7ff6cda650a6
internal/poll.(*FD).Read(0xc000620c88, {0xc0000882b1, 0x1, 0x1})
C:/Program Files/Go/src/internal/poll/fd_windows.go:436 +0x2b1 fp=0xc00048fec0 sp=0xc00048fe18 pc=0x7ff6cda65d51
net.(*netFD).Read(0xc000620c88, {0xc0000882b1?, 0xc00048ff48?, 0x7ff6cd9cdab0?})
C:/Program Files/Go/src/net/fd_posix.go:55 +0x25 fp=0xc00048ff08 sp=0xc00048fec0 pc=0x7ff6cdae31e5
net.(*conn).Read(0xc0004c41d0, {0xc0000882b1?, 0xc00048feb8?, 0x7ff6cf8718a0?})
C:/Program Files/Go/src/net/net.go:179 +0x45 fp=0xc00048ff50 sp=0xc00048ff08 pc=0x7ff6cdaf2265
net.(*TCPConn).Read(0x7ff6cdfad8b0?, {0xc0000882b1?, 0xc00003b200?, 0x7ff6cdd4c480?})
<autogenerated>:1 +0x25 fp=0xc00048ff80 sp=0xc00048ff50 pc=0x7ff6cdb02285
net/http.(*connReader).backgroundRead(0xc0000882a0)
C:/Program Files/Go/src/net/http/server.go:681 +0x37 fp=0xc00048ffc8 sp=0xc00048ff80 pc=0x7ff6cdc4e9b7
net/http.(*connReader).startBackgroundRead.gowrap2()
C:/Program Files/Go/src/net/http/server.go:677 +0x25 fp=0xc00048ffe0 sp=0xc00048ffc8 pc=0x7ff6cdc4e8e5
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00048ffe8 sp=0xc00048ffe0 pc=0x7ff6cd9d2081
created by net/http.(*connReader).startBackgroundRead in goroutine 9
C:/Program Files/Go/src/net/http/server.go:677 +0xba
rax 0xfffffffffffffff
rbx 0x1b97c194ae8
rcx 0x1b97c194b90
rdx 0x8
rdi 0x8f881ff900
rsi 0x1b97c194b90
rbp 0x1b97c0cd710
rsp 0x8f881ff2e0
r8 0x3b
r9 0x1
r10 0x7ffaadaf0000
r11 0x8f881ff2f0
r12 0xd6bf94d5e57a42bd
r13 0x0
r14 0x1b97c194b30
r15 0x1b97c0cd7c0
rip 0x7ffa57df99dd
rflags 0x10202
cs 0x33
fs 0x53
gs 0x2b
```
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2573/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2573/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6030
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6030/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6030/comments
|
https://api.github.com/repos/ollama/ollama/issues/6030/events
|
https://github.com/ollama/ollama/issues/6030
| 2,434,071,018
|
I_kwDOJ0Z1Ps6RFPnq
| 6,030
|
How to Move Model Files on an External Hard Drive?
|
{
"login": "lennondong",
"id": 3203702,
"node_id": "MDQ6VXNlcjMyMDM3MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3203702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lennondong",
"html_url": "https://github.com/lennondong",
"followers_url": "https://api.github.com/users/lennondong/followers",
"following_url": "https://api.github.com/users/lennondong/following{/other_user}",
"gists_url": "https://api.github.com/users/lennondong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lennondong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lennondong/subscriptions",
"organizations_url": "https://api.github.com/users/lennondong/orgs",
"repos_url": "https://api.github.com/users/lennondong/repos",
"events_url": "https://api.github.com/users/lennondong/events{/privacy}",
"received_events_url": "https://api.github.com/users/lennondong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-07-28T19:06:31
| 2024-11-06T13:12:43
| 2024-11-06T13:12:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
How to Move Model Files on an External Hard Drive?
Due to insufficient internal storage on my Mac, I would like to store model files on an external hard drive to run them.
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6030/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6030/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7657
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7657/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7657/comments
|
https://api.github.com/repos/ollama/ollama/issues/7657/events
|
https://github.com/ollama/ollama/pull/7657
| 2,656,985,769
|
PR_kwDOJ0Z1Ps6B2PAb
| 7,657
|
fix(mllama): sync backend between batches
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-11-13T22:30:09
| 2024-11-14T17:40:07
| 2024-11-14T17:40:04
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7657",
"html_url": "https://github.com/ollama/ollama/pull/7657",
"diff_url": "https://github.com/ollama/ollama/pull/7657.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7657.patch",
"merged_at": "2024-11-14T17:40:04"
}
|
On multi-GPU setups, and for cross attention specifically, sync the backend between batches to ensure operations are complete before moving on
Resolves #7558
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7657/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1808
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1808/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1808/comments
|
https://api.github.com/repos/ollama/ollama/issues/1808/events
|
https://github.com/ollama/ollama/pull/1808
| 2,067,423,788
|
PR_kwDOJ0Z1Ps5jUPns
| 1,808
|
Add Haystack to Community integrations
|
{
"login": "anakin87",
"id": 44616784,
"node_id": "MDQ6VXNlcjQ0NjE2Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/44616784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anakin87",
"html_url": "https://github.com/anakin87",
"followers_url": "https://api.github.com/users/anakin87/followers",
"following_url": "https://api.github.com/users/anakin87/following{/other_user}",
"gists_url": "https://api.github.com/users/anakin87/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anakin87/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anakin87/subscriptions",
"organizations_url": "https://api.github.com/users/anakin87/orgs",
"repos_url": "https://api.github.com/users/anakin87/repos",
"events_url": "https://api.github.com/users/anakin87/events{/privacy}",
"received_events_url": "https://api.github.com/users/anakin87/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-05T14:15:28
| 2024-08-28T14:03:22
| 2024-02-20T03:02:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1808",
"html_url": "https://github.com/ollama/ollama/pull/1808",
"diff_url": "https://github.com/ollama/ollama/pull/1808.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1808.patch",
"merged_at": null
}
|
Hi, maintainers!
[Haystack](https://github.com/deepset-ai/haystack) is a quite popular open-source LLM orchestration framework.
We recently developed an [integration with Ollama](https://haystack.deepset.ai/integrations/ollama).
This PR is to add Haystack to the Community integrations.
If you agree, we would also like to add one or two simple examples [here](https://github.com/jmorganca/ollama/tree/main/examples) (to be done in other PRs).
Thanks for this great project!
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1808/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/716
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/716/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/716/comments
|
https://api.github.com/repos/ollama/ollama/issues/716/events
|
https://github.com/ollama/ollama/issues/716
| 1,929,804,426
|
I_kwDOJ0Z1Ps5zBnqK
| 716
|
Swagger / OpenAPI documentation / playground
|
{
"login": "netandreus",
"id": 313477,
"node_id": "MDQ6VXNlcjMxMzQ3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/313477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/netandreus",
"html_url": "https://github.com/netandreus",
"followers_url": "https://api.github.com/users/netandreus/followers",
"following_url": "https://api.github.com/users/netandreus/following{/other_user}",
"gists_url": "https://api.github.com/users/netandreus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/netandreus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/netandreus/subscriptions",
"organizations_url": "https://api.github.com/users/netandreus/orgs",
"repos_url": "https://api.github.com/users/netandreus/repos",
"events_url": "https://api.github.com/users/netandreus/events{/privacy}",
"received_events_url": "https://api.github.com/users/netandreus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-10-06T09:46:59
| 2024-01-22T16:55:12
| 2023-12-04T19:36:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Do you plan to generate API documentation with ```github.com/swaggo/swag```?
It would be very useful.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/716/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/716/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4115
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4115/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4115/comments
|
https://api.github.com/repos/ollama/ollama/issues/4115/events
|
https://github.com/ollama/ollama/issues/4115
| 2,276,813,590
|
I_kwDOJ0Z1Ps6HtWsW
| 4,115
|
Error 403 when trying to call api/chat or api/generate from REST client
|
{
"login": "MaheshAwasare",
"id": 48666285,
"node_id": "MDQ6VXNlcjQ4NjY2Mjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/48666285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaheshAwasare",
"html_url": "https://github.com/MaheshAwasare",
"followers_url": "https://api.github.com/users/MaheshAwasare/followers",
"following_url": "https://api.github.com/users/MaheshAwasare/following{/other_user}",
"gists_url": "https://api.github.com/users/MaheshAwasare/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaheshAwasare/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaheshAwasare/subscriptions",
"organizations_url": "https://api.github.com/users/MaheshAwasare/orgs",
"repos_url": "https://api.github.com/users/MaheshAwasare/repos",
"events_url": "https://api.github.com/users/MaheshAwasare/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaheshAwasare/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 14
| 2024-05-03T03:06:15
| 2024-11-25T23:48:47
| 2024-05-07T07:16:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi, I downloaded the latest llama3 model after installing Ollama for **Windows** from https://www.ollama.com
```
C:\Windows\System32>ollama list
NAME ID SIZE MODIFIED
llama3:latest a6990ed6be41 4.7 GB 45 hours ago
```
On the command prompt it works fine and provides answers.

However, using any REST client gives a 403 Forbidden error.

I also looked at https://github.com/ollama/ollama/pull/743/commits for reference, but I don't think it solves the issue.
This is the model information that it prints at startup.
```
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
```
The following block contains both a 200 request (from the command prompt) and a 403 from the REST client.
```
[GIN] 2024/05/03 - 08:30:49 | 200 | 14.1625275s | 127.0.0.1 | POST "/api/chat"
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":56979,"status":200,"tid":"31556","timestamp":1714705249}
[GIN] 2024/05/03 - 08:31:20 | 403 | 2.0144ms | 127.0.0.1 | POST "/api/chat"
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":6,"tid":"13712","timestamp":1714705438}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":57122,"status":200,"tid":"22756","timestamp":1714705438}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":7,"tid":"13712","timestamp":1714705438}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":57122,"status":200,"tid":"22756","timestamp":1714705438}
{"function":"log_server_request","level":"INFO","line":2744,"method":"POST","msg":"request","params":{},"path":"/tokenize","remote_addr":"127.0.0.1","remote_port":57122,"status":200,"tid":"22756","timestamp":1714705438}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":8,"tid":"13712","timestamp":1714705438}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":57122,"status":200,"tid":"22756","timestamp":1714705438}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":9,"tid":"13712","timestamp":1714705438}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1819,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":12,"slot_id":0,"task_id":9,"tid":"13712","timestamp":1714705438}
{"function":"update_slots","level":"INFO","line":1843,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":9,"tid":"13712","timestamp":1714705438}
{"function":"print_timings","level":"INFO","line":276,"msg":"prompt eval time = 2535.21 ms / 12 tokens ( 211.27 ms per token, 4.73 tokens per second)","n_prompt_tokens_processed":12,"n_tokens_second":4.733339434081937,"slot_id":0,"t_prompt_processing":2535.208,"t_token":211.26733333333334,"task_id":9,"tid":"13712","timestamp":1714705443}
{"function":"print_timings","level":"INFO","line":290,"msg":"generation eval time = 2561.01 ms / 8 runs ( 320.13 ms per token, 3.12 tokens per second)","n_decoded":8,"n_tokens_second":3.123766356333495,"slot_id":0,"t_token":320.126375,"t_token_generation":2561.011,"task_id":9,"tid":"13712","timestamp":1714705443}
{"function":"print_timings","level":"INFO","line":299,"msg":" total time = 5096.22 ms","slot_id":0,"t_prompt_processing":2535.208,"t_token_generation":2561.011,"t_total":5096.219,"task_id":9,"tid":"13712","timestamp":1714705443}
{"function":"update_slots","level":"INFO","line":1651,"msg":"slot released","n_cache_tokens":20,"n_ctx":2048,"n_past":19,"n_system_tokens":0,"slot_id":0,"task_id":9,"tid":"13712","timestamp":1714705443,"truncated":false}
{"function":"log_server_request","level":"INFO","line":2744,"method":"POST","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":57122,"status":200,"tid":"22756","timestamp":1714705443}[GIN] 2024/05/03 - 08:34:03 | 200 | 5.1187818s | 127.0.0.1 | POST "/api/chat"
```
Is this a known issue, or am I missing something?
### OS
Windows
### GPU
_No response_
### CPU
Intel
### Ollama version
0.1.33
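For anyone hitting this: the 403 here usually comes from the request's `Origin` header — many REST clients add one, and Ollama rejects origins it doesn't recognize unless they are whitelisted via the `OLLAMA_ORIGINS` environment variable. A minimal sketch of a plain request that sends no `Origin` header (the model name and host are assumptions matching the setup above):

```python
import json
import urllib.request

def build_chat_request(prompt, model="llama3", host="http://127.0.0.1:11434"):
    """Build a plain /api/chat request that sends no Origin header."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode()
    return urllib.request.Request(
        f"{host}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# response = urllib.request.urlopen(build_chat_request("Why is the sky blue?"))
```

If the client must send an `Origin` (e.g. it runs in a browser), start the server with that origin whitelisted instead, e.g. `OLLAMA_ORIGINS=http://myapp.local ollama serve`.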
|
{
"login": "MaheshAwasare",
"id": 48666285,
"node_id": "MDQ6VXNlcjQ4NjY2Mjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/48666285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaheshAwasare",
"html_url": "https://github.com/MaheshAwasare",
"followers_url": "https://api.github.com/users/MaheshAwasare/followers",
"following_url": "https://api.github.com/users/MaheshAwasare/following{/other_user}",
"gists_url": "https://api.github.com/users/MaheshAwasare/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaheshAwasare/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaheshAwasare/subscriptions",
"organizations_url": "https://api.github.com/users/MaheshAwasare/orgs",
"repos_url": "https://api.github.com/users/MaheshAwasare/repos",
"events_url": "https://api.github.com/users/MaheshAwasare/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaheshAwasare/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4115/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4115/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6719
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6719/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6719/comments
|
https://api.github.com/repos/ollama/ollama/issues/6719/events
|
https://github.com/ollama/ollama/issues/6719
| 2,515,418,688
|
I_kwDOJ0Z1Ps6V7j5A
| 6,719
|
(111) Connection refused
|
{
"login": "SheltonLiu-N",
"id": 65779833,
"node_id": "MDQ6VXNlcjY1Nzc5ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/65779833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SheltonLiu-N",
"html_url": "https://github.com/SheltonLiu-N",
"followers_url": "https://api.github.com/users/SheltonLiu-N/followers",
"following_url": "https://api.github.com/users/SheltonLiu-N/following{/other_user}",
"gists_url": "https://api.github.com/users/SheltonLiu-N/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SheltonLiu-N/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SheltonLiu-N/subscriptions",
"organizations_url": "https://api.github.com/users/SheltonLiu-N/orgs",
"repos_url": "https://api.github.com/users/SheltonLiu-N/repos",
"events_url": "https://api.github.com/users/SheltonLiu-N/events{/privacy}",
"received_events_url": "https://api.github.com/users/SheltonLiu-N/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-09-10T05:14:21
| 2024-09-10T15:22:06
| 2024-09-10T15:22:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I ran into a weird problem. When I use Ollama on a server, I can run `ollama run llama3.1:70b` successfully from the command line, but it fails in my Python project:
```
process = subprocess.Popen("ollama serve", shell=True)
time.sleep(10)
ollama.pull("llama3.1:70b")
```
which returns:
```
<blockquote id="error">
<p><b>Connection to 127.0.0.1 failed.</b></p>
</blockquote>
<p id="sysmsg">The system returned: <i>(111) Connection refused</i></p>
```
I also tested from the command line with:
`curl http://127.0.0.1:11434/api/pull`
and it returns:
```
<div id="content">
<p>The following error was encountered while trying to retrieve the URL: <a href="http://127.0.0.1:11434/api/pull">http://127.0.0.1:11434/api/pull</a></p>
<blockquote id="error">
<p><b>Connection to 127.0.0.1 failed.</b></p>
</blockquote>
<p id="sysmsg">The system returned: <i>(111) Connection refused</i></p>
```
I verified that Ollama is listening on `127.0.0.1:11434`.
How can I solve this problem?
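Two things worth checking here. First, the HTML error body above looks like it was produced by an HTTP proxy intercepting the request rather than by Ollama itself, so try unsetting `http_proxy`/`https_proxy` or adding `127.0.0.1` to `NO_PROXY`. Second, the fixed `time.sleep(10)` can race the server start; a minimal sketch that polls the port instead (default host/port assumed):

```python
import socket
import time

def wait_for_server(host="127.0.0.1", port=11434, timeout=30.0):
    """Poll until the TCP port accepts connections instead of sleeping a
    fixed amount; returns False if the server never came up in time."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the server is accepting requests.
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False
```

Usage would be something like `subprocess.Popen(["ollama", "serve"])` followed by a `wait_for_server()` check before calling `ollama.pull(...)`.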
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
v0.3.10
|
{
"login": "SheltonLiu-N",
"id": 65779833,
"node_id": "MDQ6VXNlcjY1Nzc5ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/65779833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SheltonLiu-N",
"html_url": "https://github.com/SheltonLiu-N",
"followers_url": "https://api.github.com/users/SheltonLiu-N/followers",
"following_url": "https://api.github.com/users/SheltonLiu-N/following{/other_user}",
"gists_url": "https://api.github.com/users/SheltonLiu-N/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SheltonLiu-N/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SheltonLiu-N/subscriptions",
"organizations_url": "https://api.github.com/users/SheltonLiu-N/orgs",
"repos_url": "https://api.github.com/users/SheltonLiu-N/repos",
"events_url": "https://api.github.com/users/SheltonLiu-N/events{/privacy}",
"received_events_url": "https://api.github.com/users/SheltonLiu-N/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6719/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2826
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2826/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2826/comments
|
https://api.github.com/repos/ollama/ollama/issues/2826/events
|
https://github.com/ollama/ollama/issues/2826
| 2,160,307,546
|
I_kwDOJ0Z1Ps6Aw61a
| 2,826
|
How to enable GPU support on Ubuntu?
|
{
"login": "realcarlos",
"id": 13853,
"node_id": "MDQ6VXNlcjEzODUz",
"avatar_url": "https://avatars.githubusercontent.com/u/13853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/realcarlos",
"html_url": "https://github.com/realcarlos",
"followers_url": "https://api.github.com/users/realcarlos/followers",
"following_url": "https://api.github.com/users/realcarlos/following{/other_user}",
"gists_url": "https://api.github.com/users/realcarlos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/realcarlos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/realcarlos/subscriptions",
"organizations_url": "https://api.github.com/users/realcarlos/orgs",
"repos_url": "https://api.github.com/users/realcarlos/repos",
"events_url": "https://api.github.com/users/realcarlos/events{/privacy}",
"received_events_url": "https://api.github.com/users/realcarlos/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-29T04:01:37
| 2024-02-29T08:31:02
| 2024-02-29T08:31:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, I am using a DGX H800 server, and when I run
`ollama run llama2:70b`
I find that the model is loaded into system memory, not the GPU.
<img width="558" alt="Screenshot 2024-02-29 at 12 03 28" src="https://github.com/ollama/ollama/assets/13853/182fb3d8-f075-4c33-8d42-82cec09b5459">
|
{
"login": "realcarlos",
"id": 13853,
"node_id": "MDQ6VXNlcjEzODUz",
"avatar_url": "https://avatars.githubusercontent.com/u/13853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/realcarlos",
"html_url": "https://github.com/realcarlos",
"followers_url": "https://api.github.com/users/realcarlos/followers",
"following_url": "https://api.github.com/users/realcarlos/following{/other_user}",
"gists_url": "https://api.github.com/users/realcarlos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/realcarlos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/realcarlos/subscriptions",
"organizations_url": "https://api.github.com/users/realcarlos/orgs",
"repos_url": "https://api.github.com/users/realcarlos/repos",
"events_url": "https://api.github.com/users/realcarlos/events{/privacy}",
"received_events_url": "https://api.github.com/users/realcarlos/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2826/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7343
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7343/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7343/comments
|
https://api.github.com/repos/ollama/ollama/issues/7343/events
|
https://github.com/ollama/ollama/issues/7343
| 2,611,796,486
|
I_kwDOJ0Z1Ps6brNoG
| 7,343
|
Aya-Expanse 32B & 8B
|
{
"login": "vYLQs6",
"id": 143073604,
"node_id": "U_kgDOCIchRA",
"avatar_url": "https://avatars.githubusercontent.com/u/143073604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vYLQs6",
"html_url": "https://github.com/vYLQs6",
"followers_url": "https://api.github.com/users/vYLQs6/followers",
"following_url": "https://api.github.com/users/vYLQs6/following{/other_user}",
"gists_url": "https://api.github.com/users/vYLQs6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vYLQs6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vYLQs6/subscriptions",
"organizations_url": "https://api.github.com/users/vYLQs6/orgs",
"repos_url": "https://api.github.com/users/vYLQs6/repos",
"events_url": "https://api.github.com/users/vYLQs6/events{/privacy}",
"received_events_url": "https://api.github.com/users/vYLQs6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-10-24T14:32:57
| 2024-10-27T20:13:25
| 2024-10-25T00:24:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
These models should be very easy to support: people are already uploading GGUFs to Hugging Face, so Ollama should already handle them; they just need to be converted to GGUF and uploaded.
https://huggingface.co/CohereForAI/aya-expanse-32b
https://huggingface.co/CohereForAI/aya-expanse-8b


|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7343/reactions",
"total_count": 5,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7343/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4189
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4189/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4189/comments
|
https://api.github.com/repos/ollama/ollama/issues/4189/events
|
https://github.com/ollama/ollama/pull/4189
| 2,279,887,410
|
PR_kwDOJ0Z1Ps5ulaik
| 4,189
|
Skip scheduling cancelled requests and always reload if unloading
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-06T02:09:53
| 2024-05-06T21:22:25
| 2024-05-06T21:22:25
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4189",
"html_url": "https://github.com/ollama/ollama/pull/4189",
"diff_url": "https://github.com/ollama/ollama/pull/4189.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4189.patch",
"merged_at": "2024-05-06T21:22:25"
}
|
This should fix the issue seen in https://github.com/ollama/ollama/pull/4187. The issue was caused by a new request coming in for the same model _while_ it was being unloaded. I do wonder if there's a better approach here: if we know another request is coming in for a model we're loading, we shouldn't unload it on a context cancel since we'll need it anyway.
This change also discards any incoming requests that have already been cancelled so they aren't scheduled.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4189/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1101
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1101/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1101/comments
|
https://api.github.com/repos/ollama/ollama/issues/1101/events
|
https://github.com/ollama/ollama/issues/1101
| 1,989,522,713
|
I_kwDOJ0Z1Ps52lbUZ
| 1,101
|
[Question]: Use all CPU resource from Docker CPU image
|
{
"login": "LWJerri",
"id": 50290430,
"node_id": "MDQ6VXNlcjUwMjkwNDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/50290430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LWJerri",
"html_url": "https://github.com/LWJerri",
"followers_url": "https://api.github.com/users/LWJerri/followers",
"following_url": "https://api.github.com/users/LWJerri/following{/other_user}",
"gists_url": "https://api.github.com/users/LWJerri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LWJerri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LWJerri/subscriptions",
"organizations_url": "https://api.github.com/users/LWJerri/orgs",
"repos_url": "https://api.github.com/users/LWJerri/repos",
"events_url": "https://api.github.com/users/LWJerri/events{/privacy}",
"received_events_url": "https://api.github.com/users/LWJerri/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-11-12T18:46:05
| 2023-12-20T11:06:49
| 2023-12-11T09:31:47
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi. I have a dedicated server with an **Intel® Core™ i5-13500** processor ([more info here](https://www.hetzner.com/dedicated-rootserver/ex44)), but Ollama uses only ~50% of the available CPU power. What do I need to do to use all CPU resources? I'm using Docker to run Ollama; here is my `docker-compose.yaml`:
```yaml
version: "3.7"
services:
api-ollama:
restart: always
image: ollama/ollama:latest
networks:
- caddy
volumes:
- ollama:/root/.ollama
labels:
caddy: api.ollama.main.lwjerri.dev
caddy.reverse_proxy: "{{upstreams 11434}}"
volumes:
ollama:
networks:
caddy:
external: true
```

Thanks for any help <3
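For reference: the compose file above sets no CPU quota, so this is probably not a Docker limit — the likely cause is that the default thread count is tied to physical cores (the i5-13500 has 14 cores but 20 hardware threads). If you want to force a specific thread count, one option is the `num_thread` parameter; a sketch of a Modelfile (20 is an assumption matching this CPU, and using every hardware thread is not always faster for token generation):

```
FROM llama2
PARAMETER num_thread 20
```

Build it with `ollama create mymodel -f Modelfile`; the same option can also be passed per request in the API's `options` field.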
|
{
"login": "LWJerri",
"id": 50290430,
"node_id": "MDQ6VXNlcjUwMjkwNDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/50290430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LWJerri",
"html_url": "https://github.com/LWJerri",
"followers_url": "https://api.github.com/users/LWJerri/followers",
"following_url": "https://api.github.com/users/LWJerri/following{/other_user}",
"gists_url": "https://api.github.com/users/LWJerri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LWJerri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LWJerri/subscriptions",
"organizations_url": "https://api.github.com/users/LWJerri/orgs",
"repos_url": "https://api.github.com/users/LWJerri/repos",
"events_url": "https://api.github.com/users/LWJerri/events{/privacy}",
"received_events_url": "https://api.github.com/users/LWJerri/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1101/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6995
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6995/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6995/comments
|
https://api.github.com/repos/ollama/ollama/issues/6995/events
|
https://github.com/ollama/ollama/issues/6995
| 2,552,096,757
|
I_kwDOJ0Z1Ps6YHef1
| 6,995
|
when can we use llama3.2 11b or 90b
|
{
"login": "wangjingyu001",
"id": 65798211,
"node_id": "MDQ6VXNlcjY1Nzk4MjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/65798211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangjingyu001",
"html_url": "https://github.com/wangjingyu001",
"followers_url": "https://api.github.com/users/wangjingyu001/followers",
"following_url": "https://api.github.com/users/wangjingyu001/following{/other_user}",
"gists_url": "https://api.github.com/users/wangjingyu001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangjingyu001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangjingyu001/subscriptions",
"organizations_url": "https://api.github.com/users/wangjingyu001/orgs",
"repos_url": "https://api.github.com/users/wangjingyu001/repos",
"events_url": "https://api.github.com/users/wangjingyu001/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangjingyu001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-09-27T06:19:34
| 2024-09-30T20:54:21
| 2024-09-30T20:54:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6995/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6127
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6127/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6127/comments
|
https://api.github.com/repos/ollama/ollama/issues/6127/events
|
https://github.com/ollama/ollama/issues/6127
| 2,443,516,017
|
I_kwDOJ0Z1Ps6RpRhx
| 6,127
|
llama3.1 always uses tool
|
{
"login": "tomaszbk",
"id": 85851175,
"node_id": "MDQ6VXNlcjg1ODUxMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/85851175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaszbk",
"html_url": "https://github.com/tomaszbk",
"followers_url": "https://api.github.com/users/tomaszbk/followers",
"following_url": "https://api.github.com/users/tomaszbk/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaszbk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaszbk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaszbk/subscriptions",
"organizations_url": "https://api.github.com/users/tomaszbk/orgs",
"repos_url": "https://api.github.com/users/tomaszbk/repos",
"events_url": "https://api.github.com/users/tomaszbk/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaszbk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 16
| 2024-08-01T21:35:21
| 2024-12-31T17:07:27
| 2024-08-02T00:59:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
No matter what I prompt, llama3.1 always replies with a tool call.
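For context, a minimal sketch of the two request shapes involved (model name, prompt, and tool definition are illustrative, and a local Ollama server on the default port is assumed): when a `tools` array is included in a `/api/chat` request, llama3.1's chat template frames the turn as a tool-use decision, so omitting `tools` on purely conversational turns is the usual workaround.

```shell
# Two example /api/chat payloads. Sending WITH_TOOLS tends to elicit a tool
# call on every turn; NO_TOOLS yields a plain text reply.
WITH_TOOLS='{"model": "llama3.1", "messages": [{"role": "user", "content": "hello"}], "tools": [{"type": "function", "function": {"name": "get_weather"}}]}'
NO_TOOLS='{"model": "llama3.1", "messages": [{"role": "user", "content": "hello"}]}'
# curl http://localhost:11434/api/chat -d "$WITH_TOOLS"
# curl http://localhost:11434/api/chat -d "$NO_TOOLS"
echo "$NO_TOOLS"
```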
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.10.58
|
{
"login": "tomaszbk",
"id": 85851175,
"node_id": "MDQ6VXNlcjg1ODUxMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/85851175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaszbk",
"html_url": "https://github.com/tomaszbk",
"followers_url": "https://api.github.com/users/tomaszbk/followers",
"following_url": "https://api.github.com/users/tomaszbk/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaszbk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaszbk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaszbk/subscriptions",
"organizations_url": "https://api.github.com/users/tomaszbk/orgs",
"repos_url": "https://api.github.com/users/tomaszbk/repos",
"events_url": "https://api.github.com/users/tomaszbk/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaszbk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6127/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6127/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2388
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2388/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2388/comments
|
https://api.github.com/repos/ollama/ollama/issues/2388/events
|
https://github.com/ollama/ollama/issues/2388
| 2,123,060,333
|
I_kwDOJ0Z1Ps5-i1Rt
| 2,388
|
Local caching / Custom Registry
|
{
"login": "KaruroChori",
"id": 9377545,
"node_id": "MDQ6VXNlcjkzNzc1NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9377545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaruroChori",
"html_url": "https://github.com/KaruroChori",
"followers_url": "https://api.github.com/users/KaruroChori/followers",
"following_url": "https://api.github.com/users/KaruroChori/following{/other_user}",
"gists_url": "https://api.github.com/users/KaruroChori/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KaruroChori/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaruroChori/subscriptions",
"organizations_url": "https://api.github.com/users/KaruroChori/orgs",
"repos_url": "https://api.github.com/users/KaruroChori/repos",
"events_url": "https://api.github.com/users/KaruroChori/events{/privacy}",
"received_events_url": "https://api.github.com/users/KaruroChori/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 8
| 2024-02-07T13:37:57
| 2025-01-09T07:44:44
| 2024-03-11T19:48:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is there any supported solution to:
- locally host a custom registry?
- at the very least a partial mirror of the registry for those models which have been downloaded once already?
Downloading the same model from the original source on multiple machines within the same network is quite slow and wasteful.
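One pragmatic workaround in the meantime (not an official registry; the paths and remote host are illustrative): Ollama stores pulled models as content-addressed blobs under its models directory, so syncing that directory between machines on the same network avoids repeated downloads.

```shell
# Locate the local model store (the OLLAMA_MODELS env var overrides the default).
SRC="${OLLAMA_MODELS:-$HOME/.ollama/models}"
# Copy manifests and blobs to another machine; because blobs are
# content-addressed, rsync only transfers layers the target lacks.
# rsync -a "$SRC/manifests" "$SRC/blobs" otherhost:~/.ollama/models/
echo "$SRC"
```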
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2388/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2388/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6595
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6595/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6595/comments
|
https://api.github.com/repos/ollama/ollama/issues/6595/events
|
https://github.com/ollama/ollama/issues/6595
| 2,501,337,734
|
I_kwDOJ0Z1Ps6VF2KG
| 6,595
|
4 AMD GPUs with mixed VRAM sizes: layer predictions incorrect leads to runner crash
|
{
"login": "MikeLP",
"id": 368849,
"node_id": "MDQ6VXNlcjM2ODg0OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/368849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MikeLP",
"html_url": "https://github.com/MikeLP",
"followers_url": "https://api.github.com/users/MikeLP/followers",
"following_url": "https://api.github.com/users/MikeLP/following{/other_user}",
"gists_url": "https://api.github.com/users/MikeLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MikeLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MikeLP/subscriptions",
"organizations_url": "https://api.github.com/users/MikeLP/orgs",
"repos_url": "https://api.github.com/users/MikeLP/repos",
"events_url": "https://api.github.com/users/MikeLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/MikeLP/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 41
| 2024-09-02T16:53:13
| 2024-11-13T22:17:45
| 2024-10-23T15:34:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I load a large model that doesn't fit in VRAM, Ollama crashes:
```
➜  ~ ollama run dbrx:132b-instruct-q8_0
Error: llama runner process has terminated: signal: segmentation fault (core dumped)
```
This issue does not occur with Ollama 0.3.6.
My hardware:
CPU: AMD Ryzen Threadripper PRO 7965WX 24-Cores
GPU 1: AMD Instinct MI100 [Discrete]
GPU 2: AMD Instinct MI100 [Discrete]
GPU 3: AMD Radeon RX 6900 XT [Discrete]
GPU 4: AMD Radeon VII [Discrete]
VRAM: 96GiB
RAM: 128 GiB
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.7 - 0.3.9
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6595/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6595/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4837
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4837/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4837/comments
|
https://api.github.com/repos/ollama/ollama/issues/4837/events
|
https://github.com/ollama/ollama/issues/4837
| 2,336,227,433
|
I_kwDOJ0Z1Ps6LQABp
| 4,837
|
Stop model without removing
|
{
"login": "alemian95",
"id": 43844752,
"node_id": "MDQ6VXNlcjQzODQ0NzUy",
"avatar_url": "https://avatars.githubusercontent.com/u/43844752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alemian95",
"html_url": "https://github.com/alemian95",
"followers_url": "https://api.github.com/users/alemian95/followers",
"following_url": "https://api.github.com/users/alemian95/following{/other_user}",
"gists_url": "https://api.github.com/users/alemian95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alemian95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alemian95/subscriptions",
"organizations_url": "https://api.github.com/users/alemian95/orgs",
"repos_url": "https://api.github.com/users/alemian95/repos",
"events_url": "https://api.github.com/users/alemian95/events{/privacy}",
"received_events_url": "https://api.github.com/users/alemian95/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-06-05T15:32:09
| 2024-06-05T20:27:06
| 2024-06-05T20:27:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Is it possible to have a command to clear the model from the memory without removing it?
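For reference, later Ollama releases support exactly this via the API: a request with `keep_alive: 0` unloads the model from memory while leaving it on disk. A minimal sketch (the model name is illustrative, and a server on the default port 11434 is assumed):

```shell
# Ask the server to evict the model from memory immediately after this
# request completes; the model files on disk are untouched.
PAYLOAD='{"model": "llama3", "keep_alive": 0}'
# curl http://localhost:11434/api/generate -d "$PAYLOAD"
echo "$PAYLOAD"
```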
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4837/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7479
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7479/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7479/comments
|
https://api.github.com/repos/ollama/ollama/issues/7479/events
|
https://github.com/ollama/ollama/issues/7479
| 2,631,137,709
|
I_kwDOJ0Z1Ps6c0_mt
| 7,479
|
Is there a way to make Ollama load the model fully onto the GPU by default on every call?
|
{
"login": "fg2501",
"id": 164639270,
"node_id": "U_kgDOCdAyJg",
"avatar_url": "https://avatars.githubusercontent.com/u/164639270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fg2501",
"html_url": "https://github.com/fg2501",
"followers_url": "https://api.github.com/users/fg2501/followers",
"following_url": "https://api.github.com/users/fg2501/following{/other_user}",
"gists_url": "https://api.github.com/users/fg2501/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fg2501/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fg2501/subscriptions",
"organizations_url": "https://api.github.com/users/fg2501/orgs",
"repos_url": "https://api.github.com/users/fg2501/repos",
"events_url": "https://api.github.com/users/fg2501/events{/privacy}",
"received_events_url": "https://api.github.com/users/fg2501/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 14
| 2024-11-03T11:07:45
| 2025-01-12T11:59:28
| 2024-11-06T05:17:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?

On each call, the GPU is often not used at 100%: sometimes it's half CPU and half GPU, and sometimes everything runs on the CPU. Is there a way to force GPU-only execution?
Also, a loaded model is unloaded from the GPU after 5 minutes by default. Can I change that to 10 minutes, or keep it loaded permanently?
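A sketch of the usual knobs (values are illustrative): `num_gpu` under `options` controls how many layers are offloaded to the GPU, and `keep_alive` on the request (or the server-side `OLLAMA_KEEP_ALIVE` environment variable) controls how long a loaded model stays resident, with `-1` meaning indefinitely.

```shell
# Server side: keep models loaded for 10 minutes instead of the 5-minute default.
export OLLAMA_KEEP_ALIVE=10m
# Per request: offload all layers to the GPU (a num_gpu larger than the
# model's layer count offloads everything) and never unload (keep_alive: -1).
PAYLOAD='{"model": "llama3", "prompt": "hi", "keep_alive": -1, "options": {"num_gpu": 999}}'
# curl http://localhost:11434/api/generate -d "$PAYLOAD"
echo "$OLLAMA_KEEP_ALIVE"
```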
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.29
|
{
"login": "fg2501",
"id": 164639270,
"node_id": "U_kgDOCdAyJg",
"avatar_url": "https://avatars.githubusercontent.com/u/164639270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fg2501",
"html_url": "https://github.com/fg2501",
"followers_url": "https://api.github.com/users/fg2501/followers",
"following_url": "https://api.github.com/users/fg2501/following{/other_user}",
"gists_url": "https://api.github.com/users/fg2501/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fg2501/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fg2501/subscriptions",
"organizations_url": "https://api.github.com/users/fg2501/orgs",
"repos_url": "https://api.github.com/users/fg2501/repos",
"events_url": "https://api.github.com/users/fg2501/events{/privacy}",
"received_events_url": "https://api.github.com/users/fg2501/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7479/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4580
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4580/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4580/comments
|
https://api.github.com/repos/ollama/ollama/issues/4580/events
|
https://github.com/ollama/ollama/pull/4580
| 2,311,650,111
|
PR_kwDOJ0Z1Ps5wQ6NW
| 4,580
|
Use flash attention flag for now
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-05-22T23:18:11
| 2024-05-25T14:56:12
| 2024-05-23T04:52:09
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4580",
"html_url": "https://github.com/ollama/ollama/pull/4580",
"diff_url": "https://github.com/ollama/ollama/pull/4580.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4580.patch",
"merged_at": "2024-05-23T04:52:09"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4580/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/142
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/142/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/142/comments
|
https://api.github.com/repos/ollama/ollama/issues/142/events
|
https://github.com/ollama/ollama/issues/142
| 1,814,432,890
|
I_kwDOJ0Z1Ps5sJgx6
| 142
|
Persist conversation histories
|
{
"login": "nathanleclaire",
"id": 1476820,
"node_id": "MDQ6VXNlcjE0NzY4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1476820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathanleclaire",
"html_url": "https://github.com/nathanleclaire",
"followers_url": "https://api.github.com/users/nathanleclaire/followers",
"following_url": "https://api.github.com/users/nathanleclaire/following{/other_user}",
"gists_url": "https://api.github.com/users/nathanleclaire/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nathanleclaire/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nathanleclaire/subscriptions",
"organizations_url": "https://api.github.com/users/nathanleclaire/orgs",
"repos_url": "https://api.github.com/users/nathanleclaire/repos",
"events_url": "https://api.github.com/users/nathanleclaire/events{/privacy}",
"received_events_url": "https://api.github.com/users/nathanleclaire/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2023-07-20T16:56:56
| 2024-01-25T20:14:11
| 2024-01-25T20:14:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be really fun and useful to have a `docker ps`-like persistence of each session
```
SESSION_ID MODEL CREATED AT LAST CHAT SUMMARY
asdfasdfasd llama2 50 minutes ago 1 minute ago Explorations with Waluigi
qwfdsqwefd llama2 30 minutes ago 20 minutes ago Mario is Mean
```
This would also provide a basis for syncing them around. I'd really like to be able to continue a chat from one computer on another!
Then eventually maybe even something like https://github.com/jmorganca/ollama/pull/69 could be expanded to all history or a subset of it.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/142/reactions",
"total_count": 8,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/142/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6932
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6932/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6932/comments
|
https://api.github.com/repos/ollama/ollama/issues/6932/events
|
https://github.com/ollama/ollama/issues/6932
| 2,545,257,411
|
I_kwDOJ0Z1Ps6XtYvD
| 6,932
|
qwen2.5coder /api/generate odd behavior when `suffix` is present but empty string.
|
{
"login": "kirel",
"id": 9124,
"node_id": "MDQ6VXNlcjkxMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kirel",
"html_url": "https://github.com/kirel",
"followers_url": "https://api.github.com/users/kirel/followers",
"following_url": "https://api.github.com/users/kirel/following{/other_user}",
"gists_url": "https://api.github.com/users/kirel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kirel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kirel/subscriptions",
"organizations_url": "https://api.github.com/users/kirel/orgs",
"repos_url": "https://api.github.com/users/kirel/repos",
"events_url": "https://api.github.com/users/kirel/events{/privacy}",
"received_events_url": "https://api.github.com/users/kirel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-09-24T12:24:05
| 2024-09-24T12:24:05
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Please check the difference between
```
echo -e $(curl http://localhost:11434/api/generate -d '{
"model": "qwen2.5-coder:1.5b",
"prompt": "def fib(", "suffix": " "
}' | jq -s 'map(.response) | join("")')
```
and
```
echo -e $(curl http://localhost:11434/api/generate -d '{
"model": "qwen2.5-coder:1.5b",
"prompt": "def fib(", "suffix": ""
}' | jq -s 'map(.response) | join("")')
```
In the second case the model does not act as FIM, as I would expect (and as the template suggests). In the first case it does.
Maybe this is intended but I would not have expected it.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.3.11
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6932/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1230
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1230/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1230/comments
|
https://api.github.com/repos/ollama/ollama/issues/1230/events
|
https://github.com/ollama/ollama/issues/1230
| 2,005,288,754
|
I_kwDOJ0Z1Ps53hkcy
| 1,230
|
Incorrect Version Displayed with Command: ollama --version
|
{
"login": "Samk13",
"id": 36583694,
"node_id": "MDQ6VXNlcjM2NTgzNjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/36583694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Samk13",
"html_url": "https://github.com/Samk13",
"followers_url": "https://api.github.com/users/Samk13/followers",
"following_url": "https://api.github.com/users/Samk13/following{/other_user}",
"gists_url": "https://api.github.com/users/Samk13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Samk13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Samk13/subscriptions",
"organizations_url": "https://api.github.com/users/Samk13/orgs",
"repos_url": "https://api.github.com/users/Samk13/repos",
"events_url": "https://api.github.com/users/Samk13/events{/privacy}",
"received_events_url": "https://api.github.com/users/Samk13/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2023-11-21T23:03:06
| 2023-11-21T23:48:21
| 2023-11-21T23:48:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When running the version command, the output incorrectly displays ollama version 0.0.0 instead of the expected version:
```bash
ollama --version
ollama version 0.0.0
```
expected behavior:
```bash
ollama --version
ollama version 0.1.11
```
Current Ollama version: `V0.1.11`
Running on `WSL2 Ubuntu 22.04.3 LTS`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1230/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6664
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6664/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6664/comments
|
https://api.github.com/repos/ollama/ollama/issues/6664/events
|
https://github.com/ollama/ollama/issues/6664
| 2,508,984,102
|
I_kwDOJ0Z1Ps6VjA8m
| 6,664
|
Reflection 70B model request
|
{
"login": "gileneusz",
"id": 34601970,
"node_id": "MDQ6VXNlcjM0NjAxOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gileneusz",
"html_url": "https://github.com/gileneusz",
"followers_url": "https://api.github.com/users/gileneusz/followers",
"following_url": "https://api.github.com/users/gileneusz/following{/other_user}",
"gists_url": "https://api.github.com/users/gileneusz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gileneusz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gileneusz/subscriptions",
"organizations_url": "https://api.github.com/users/gileneusz/orgs",
"repos_url": "https://api.github.com/users/gileneusz/repos",
"events_url": "https://api.github.com/users/gileneusz/events{/privacy}",
"received_events_url": "https://api.github.com/users/gileneusz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-09-05T23:05:16
| 2024-09-10T22:57:01
| 2024-09-06T06:30:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
would be nice to add it
<img width="593" alt="Screenshot 2024-09-06 at 01 04 54" src="https://github.com/user-attachments/assets/4d83c4aa-9d6d-43c0-8e25-693aa3c8cee6">
https://huggingface.co/mattshumer/Reflection-70B
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6664/reactions",
"total_count": 13,
"+1": 13,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6664/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4670
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4670/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4670/comments
|
https://api.github.com/repos/ollama/ollama/issues/4670/events
|
https://github.com/ollama/ollama/issues/4670
| 2,319,818,939
|
I_kwDOJ0Z1Ps6KRaC7
| 4,670
|
llama3 8b BF16 error
|
{
"login": "ccbadd",
"id": 17816991,
"node_id": "MDQ6VXNlcjE3ODE2OTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/17816991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ccbadd",
"html_url": "https://github.com/ccbadd",
"followers_url": "https://api.github.com/users/ccbadd/followers",
"following_url": "https://api.github.com/users/ccbadd/following{/other_user}",
"gists_url": "https://api.github.com/users/ccbadd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ccbadd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ccbadd/subscriptions",
"organizations_url": "https://api.github.com/users/ccbadd/orgs",
"repos_url": "https://api.github.com/users/ccbadd/repos",
"events_url": "https://api.github.com/users/ccbadd/events{/privacy}",
"received_events_url": "https://api.github.com/users/ccbadd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
},
{
"id": 6947643302,
"node_id": "LA_kwDOJ0Z1Ps8AAAABnhyfpg",
"url": "https://api.github.com/repos/ollama/ollama/labels/create",
"name": "create",
"color": "b60205",
"default": false,
"description": "Issues relating to ollama create"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-05-27T21:54:04
| 2024-08-13T18:09:42
| 2024-08-09T23:49:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I downloaded the llama3 8b bf16 model from Hugging Face, and when I run:
```
ollama create llama3bf16 -f Modfile
```
I get this output:
```
transferring model data
Error: invalid file magic
```
This is on a Win 11 PC running ollama 0.1.38 with an AMD W6800 GPU. Is the bf16 model not supported?
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.38
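The "invalid file magic" error means the file header was not recognized. Assuming the download was meant to be a GGUF file (whose header begins with the ASCII bytes `GGUF`), a quick sketch for checking whether a file actually has that header — the demo file below is a throwaway, not a real model:

```python
import tempfile, os

def has_gguf_magic(path: str) -> bool:
    """Return True if the file starts with the 4-byte GGUF magic."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Demo with a throwaway file: a real GGUF download should pass this check,
# while a partial download or a safetensors/PyTorch checkpoint will not.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"GGUF" + b"\x00" * 4)
print(has_gguf_magic(tmp.name))  # True for a file with a valid header
os.remove(tmp.name)
```

A `False` result on the downloaded file would point at the file format (or a truncated download) rather than at the GPU or OS.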
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4670/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4670/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2723
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2723/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2723/comments
|
https://api.github.com/repos/ollama/ollama/issues/2723/events
|
https://github.com/ollama/ollama/issues/2723
| 2,152,207,503
|
I_kwDOJ0Z1Ps6ASBSP
| 2,723
|
Updating max_tokens for LLM by OpenAI library doesn't work
|
{
"login": "shashade2012",
"id": 22316457,
"node_id": "MDQ6VXNlcjIyMzE2NDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/22316457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shashade2012",
"html_url": "https://github.com/shashade2012",
"followers_url": "https://api.github.com/users/shashade2012/followers",
"following_url": "https://api.github.com/users/shashade2012/following{/other_user}",
"gists_url": "https://api.github.com/users/shashade2012/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shashade2012/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shashade2012/subscriptions",
"organizations_url": "https://api.github.com/users/shashade2012/orgs",
"repos_url": "https://api.github.com/users/shashade2012/repos",
"events_url": "https://api.github.com/users/shashade2012/events{/privacy}",
"received_events_url": "https://api.github.com/users/shashade2012/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-02-24T08:20:27
| 2024-03-13T03:35:25
| 2024-03-13T03:35:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I need to adjust the default token limit for my Large Language Model (LLM). Currently, I'm using Ollama with the Mistral model and have created two clients, one using the Ollama Python library and the other using the OpenAI library. Specifically, I want to increase the default maximum token limit to handle longer prompts. When I attempted to update the options parameter `num_ctx` in the Ollama Python library, it worked successfully.
```python
response = client.chat(model=MODEL, options={"num_ctx": 2048},
                       messages=messages,
                       stream=False)
```
But it didn't work when I tried to update max_tokens with the OpenAI library.
```python
response = client.chat.completions.create(model=MODEL,
                                          messages=messages, max_tokens=2048,
                                          stream=False)
```
But according to the following reference, it seems max_tokens is supported.
https://github.com/ollama/ollama/blob/main/docs/openai.md#supported-request-fields
Please help check whether updating max_tokens is really supported through the OpenAI library, or whether there is another way to update the token limit for OpenAI.
Thanks.
BRs
Bruce
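One likely explanation (an assumption based on the linked docs, not a confirmed diagnosis): `num_ctx` and `max_tokens` are different knobs. On Ollama's native API, `num_ctx` sets the context window, while the OpenAI-style `max_tokens` caps the number of *generated* tokens. Building the two request payloads side by side makes the difference visible; the model name and message are just examples:

```python
# Native Ollama /api/chat payload: "options.num_ctx" sizes the context window.
native_payload = {
    "model": "mistral",                 # example model name
    "messages": [{"role": "user", "content": "hi"}],
    "options": {"num_ctx": 2048},       # context window size
    "stream": False,
}

# OpenAI-compatible payload: "max_tokens" limits generated output length,
# not the context window, so it is not a substitute for num_ctx.
openai_payload = {
    "model": "mistral",
    "messages": [{"role": "user", "content": "hi"}],
    "max_tokens": 2048,                 # cap on tokens generated
    "stream": False,
}

print(native_payload["options"]["num_ctx"], openai_payload["max_tokens"])
```

So `max_tokens` may well be accepted by the endpoint yet still not grow the context the way `num_ctx` does.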
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2723/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3294
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3294/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3294/comments
|
https://api.github.com/repos/ollama/ollama/issues/3294/events
|
https://github.com/ollama/ollama/issues/3294
| 2,202,827,971
|
I_kwDOJ0Z1Ps6DTHzD
| 3,294
|
System ram won't free up when using cuda.
|
{
"login": "stevenhobs",
"id": 47906512,
"node_id": "MDQ6VXNlcjQ3OTA2NTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/47906512?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevenhobs",
"html_url": "https://github.com/stevenhobs",
"followers_url": "https://api.github.com/users/stevenhobs/followers",
"following_url": "https://api.github.com/users/stevenhobs/following{/other_user}",
"gists_url": "https://api.github.com/users/stevenhobs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevenhobs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevenhobs/subscriptions",
"organizations_url": "https://api.github.com/users/stevenhobs/orgs",
"repos_url": "https://api.github.com/users/stevenhobs/repos",
"events_url": "https://api.github.com/users/stevenhobs/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevenhobs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-03-22T16:03:23
| 2024-06-01T21:53:17
| 2024-06-01T21:52:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I run starcoder2:15B, the model occupies 9.1GB of GPU memory, and the ollama serve process should occupy about 1GB of OS memory. But when I check the real situation, it seems the model is also cached in system RAM and isn't freed until I end the ollama process. Is this typical?


### What did you expect to see?
When loading the LLM with CUDA, the copy of the model cached in OS RAM should be released, or Windows Task Manager should report normal usage.
### Steps to reproduce
In the past, the process behaved normally.
### Are there any recent changes that introduced the issue?
I only applied a Windows OS patch before this started happening, and updated the Ollama version to 1.29.
### OS
Windows
### Architecture
amd64
### Platform
_No response_
### Ollama version
_No response_
### GPU
Nvidia
### GPU info
GPU Nvidia 3060m. This is ok!
### CPU
Intel
### Other software
None
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3294/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1186
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1186/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1186/comments
|
https://api.github.com/repos/ollama/ollama/issues/1186/events
|
https://github.com/ollama/ollama/pull/1186
| 2,000,072,791
|
PR_kwDOJ0Z1Ps5fzSP3
| 1,186
|
fix cross device rename
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-11-17T23:22:38
| 2023-11-18T05:54:54
| 2023-11-18T05:54:54
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1186",
"html_url": "https://github.com/ollama/ollama/pull/1186",
"diff_url": "https://github.com/ollama/ollama/pull/1186.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1186.patch",
"merged_at": "2023-11-18T05:54:54"
}
|
`os.Rename` is only intended for files on the same filesystem. Instead of working around that, store the temporary file in the blobs directory.
resolves #1181
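The idea behind the fix is general: create the temporary file on the same filesystem as its final destination, so the rename never crosses a device boundary. A minimal Python sketch of the same pattern (the real change is in Go; paths and names here are illustrative):

```python
import os, tempfile

def atomic_write(dest_path: str, data: bytes) -> None:
    # Create the temp file in the destination's own directory so the final
    # rename never crosses a filesystem boundary (a cross-device rename
    # raises OSError with errno EXDEV, "invalid cross-device link").
    dest_dir = os.path.dirname(dest_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dest_dir)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.rename(tmp_path, dest_path)  # same filesystem: atomic and safe
    except BaseException:
        os.unlink(tmp_path)
        raise

with tempfile.TemporaryDirectory() as d:
    atomic_write(os.path.join(d, "blob"), b"payload")
    print("written")
```

Writing to a sibling temp file and renaming into place also keeps the write atomic: readers see either the old file or the complete new one, never a partial download.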
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1186/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5302
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5302/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5302/comments
|
https://api.github.com/repos/ollama/ollama/issues/5302/events
|
https://github.com/ollama/ollama/pull/5302
| 2,375,482,955
|
PR_kwDOJ0Z1Ps5zpH_M
| 5,302
|
Submit Chinese Document
|
{
"login": "aibabelx",
"id": 16663208,
"node_id": "MDQ6VXNlcjE2NjYzMjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/16663208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aibabelx",
"html_url": "https://github.com/aibabelx",
"followers_url": "https://api.github.com/users/aibabelx/followers",
"following_url": "https://api.github.com/users/aibabelx/following{/other_user}",
"gists_url": "https://api.github.com/users/aibabelx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aibabelx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aibabelx/subscriptions",
"organizations_url": "https://api.github.com/users/aibabelx/orgs",
"repos_url": "https://api.github.com/users/aibabelx/repos",
"events_url": "https://api.github.com/users/aibabelx/events{/privacy}",
"received_events_url": "https://api.github.com/users/aibabelx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-06-26T14:12:07
| 2024-06-27T02:27:49
| 2024-06-27T02:27:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5302",
"html_url": "https://github.com/ollama/ollama/pull/5302",
"diff_url": "https://github.com/ollama/ollama/pull/5302.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5302.patch",
"merged_at": null
}
|
Add Chinese Document
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5302/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5885
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5885/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5885/comments
|
https://api.github.com/repos/ollama/ollama/issues/5885/events
|
https://github.com/ollama/ollama/issues/5885
| 2,425,834,653
|
I_kwDOJ0Z1Ps6Ql0yd
| 5,885
|
llama3.1 template needs to be updated to include tools
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-07-23T18:22:12
| 2024-10-29T08:54:23
| 2024-07-24T16:35:35
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The current version of llama3.1 doesn't support tool use:
```
$ curl -s http://aitoolkit:11434/v1/chat/completions -d '{"model": "llama3.1","tools":[{"type":"function","function": {}}], "messages": [{"role":"user","content":"weather in zurich"}], "stream": false}' | jq
{
"error": {
"message": "llama3.1 does not support tools",
"type": "api_error",
"param": null,
"code": null
}
}
```
I was able to make it work by using the template from llama3-groq-tool-use:
```
$ curl -s http://aitoolkit:11434/v1/chat/completions -d '{"model": "llama3.1-tool","tools":[{"type":"function","function": {}}], "messages": [{"role":"user","content":"weather in zurich"}], "stream": false}' | jq
{
"id": "chatcmpl-43",
"object": "chat.completion",
"created": 1721758820,
"model": "llama3.1-tool",
"system_fingerprint": "fp_ollama",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "call_bftkcsk3",
"type": "function",
"function": {
"name": "get_weather",
"arguments": "{}"
}
}
]
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 139,
"completion_tokens": 20,
"total_tokens": 159
}
}
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.8
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5885/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/5885/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4901
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4901/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4901/comments
|
https://api.github.com/repos/ollama/ollama/issues/4901/events
|
https://github.com/ollama/ollama/issues/4901
| 2,339,970,630
|
I_kwDOJ0Z1Ps6LeR5G
| 4,901
|
Error: pull model manifest: ssh: no key found
|
{
"login": "674316",
"id": 24378150,
"node_id": "MDQ6VXNlcjI0Mzc4MTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/24378150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/674316",
"html_url": "https://github.com/674316",
"followers_url": "https://api.github.com/users/674316/followers",
"following_url": "https://api.github.com/users/674316/following{/other_user}",
"gists_url": "https://api.github.com/users/674316/gists{/gist_id}",
"starred_url": "https://api.github.com/users/674316/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/674316/subscriptions",
"organizations_url": "https://api.github.com/users/674316/orgs",
"repos_url": "https://api.github.com/users/674316/repos",
"events_url": "https://api.github.com/users/674316/events{/privacy}",
"received_events_url": "https://api.github.com/users/674316/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-06-07T09:10:55
| 2024-10-20T17:25:45
| 2024-07-11T02:37:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
ollama pull vicuna
>>
pulling manifest
Error: pull model manifest: ssh: no key found
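For anyone hitting the same error: it usually means the client's signing key under the Ollama home directory is missing or unreadable. A commonly suggested recovery sketch (the paths are assumptions for a default Windows install under Git Bash; Ollama regenerates the keypair on restart):

```shell
# Remove the possibly missing/corrupt keypair (assumed default location)
rm -f "$USERPROFILE/.ollama/id_ed25519" "$USERPROFILE/.ollama/id_ed25519.pub"

# Restart the Ollama app/service so a fresh keypair is generated, then retry
ollama pull vicuna
```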
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.1.41
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4901/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7173
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7173/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7173/comments
|
https://api.github.com/repos/ollama/ollama/issues/7173/events
|
https://github.com/ollama/ollama/issues/7173
| 2,581,895,770
|
I_kwDOJ0Z1Ps6Z5Jpa
| 7,173
|
Model Response to Harmful Queries Not Properly Flagged – Safety Issue
|
{
"login": "diazr04",
"id": 160553481,
"node_id": "U_kgDOCZHaCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/160553481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/diazr04",
"html_url": "https://github.com/diazr04",
"followers_url": "https://api.github.com/users/diazr04/followers",
"following_url": "https://api.github.com/users/diazr04/following{/other_user}",
"gists_url": "https://api.github.com/users/diazr04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/diazr04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diazr04/subscriptions",
"organizations_url": "https://api.github.com/users/diazr04/orgs",
"repos_url": "https://api.github.com/users/diazr04/repos",
"events_url": "https://api.github.com/users/diazr04/events{/privacy}",
"received_events_url": "https://api.github.com/users/diazr04/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-10-11T17:47:31
| 2024-10-11T23:05:42
| 2024-10-11T23:05:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, I was testing the system’s response to potentially harmful queries to see how well it handles safety protocols. Unfortunately, I noticed that the model provided detailed information on a topic that should likely have been flagged or blocked for safety reasons. I’m reporting this to help improve the model’s filters and safeguards, as it’s important for the system to handle such cases appropriately.
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7173/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5726
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5726/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5726/comments
|
https://api.github.com/repos/ollama/ollama/issues/5726/events
|
https://github.com/ollama/ollama/pull/5726
| 2,411,754,820
|
PR_kwDOJ0Z1Ps51i-hY
| 5,726
|
fix unmarshal type errors
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-16T18:11:29
| 2024-07-16T19:12:11
| 2024-07-16T19:12:10
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5726",
"html_url": "https://github.com/ollama/ollama/pull/5726",
"diff_url": "https://github.com/ollama/ollama/pull/5726.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5726.patch",
"merged_at": "2024-07-16T19:12:10"
}
|
skip unmarshalable types when parsing tool call responses
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5726/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5424
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5424/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5424/comments
|
https://api.github.com/repos/ollama/ollama/issues/5424/events
|
https://github.com/ollama/ollama/issues/5424
| 2,384,935,750
|
I_kwDOJ0Z1Ps6OJztG
| 5,424
|
Supports voice recognition and text-to-speech capabilities, with customizable extension abilities
|
{
"login": "skytodmoon",
"id": 11644650,
"node_id": "MDQ6VXNlcjExNjQ0NjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11644650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skytodmoon",
"html_url": "https://github.com/skytodmoon",
"followers_url": "https://api.github.com/users/skytodmoon/followers",
"following_url": "https://api.github.com/users/skytodmoon/following{/other_user}",
"gists_url": "https://api.github.com/users/skytodmoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skytodmoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skytodmoon/subscriptions",
"organizations_url": "https://api.github.com/users/skytodmoon/orgs",
"repos_url": "https://api.github.com/users/skytodmoon/repos",
"events_url": "https://api.github.com/users/skytodmoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/skytodmoon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-07-02T00:01:13
| 2025-01-21T03:59:53
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Feature Request: Support for Voice Recognition and Text-to-Speech with Custom Extension Capabilities
I would like to propose the addition of voice recognition and text-to-speech functionalities to the project. These features would greatly enhance the user experience by allowing for hands-free interaction and accessibility.
Additionally, I suggest implementing a customizable extension framework that would enable developers to integrate their own voice commands or speech synthesis options, thereby expanding the project's versatility and adaptability to various use cases.
Thank you for considering this enhancement to the project. I believe these features would be a valuable addition and open up new possibilities for users and developers alike.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5424/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5424/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5969
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5969/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5969/comments
|
https://api.github.com/repos/ollama/ollama/issues/5969/events
|
https://github.com/ollama/ollama/issues/5969
| 2,431,330,333
|
I_kwDOJ0Z1Ps6Q6ygd
| 5,969
|
llama3.1 8B loses context
|
{
"login": "Damien2s",
"id": 89670386,
"node_id": "MDQ6VXNlcjg5NjcwMzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/89670386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Damien2s",
"html_url": "https://github.com/Damien2s",
"followers_url": "https://api.github.com/users/Damien2s/followers",
"following_url": "https://api.github.com/users/Damien2s/following{/other_user}",
"gists_url": "https://api.github.com/users/Damien2s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Damien2s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Damien2s/subscriptions",
"organizations_url": "https://api.github.com/users/Damien2s/orgs",
"repos_url": "https://api.github.com/users/Damien2s/repos",
"events_url": "https://api.github.com/users/Damien2s/events{/privacy}",
"received_events_url": "https://api.github.com/users/Damien2s/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-07-26T03:19:58
| 2024-08-12T02:27:07
| 2024-08-12T02:00:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
llama3.1 doesn't seem to remember the context.
Here is an example:
> Me:
> Hello can you tell me how much is 56*9?
>
> llama3.1:
> That's a simple one!
> 56 * 9 = 504
> Is that correct?
>
> Me:
> can you divide this by 4?
>
> llama3.1
> However, I don't see any specific numbers or values mentioned in the text that can be divided by 4.
>
> If you're looking for a way to break up the content into smaller chunks, I could try suggesting some possible ways to reorganize it. For example:
>
> 1. **Divide into sections**: Break down the content into distinct sections or topics, such as "Core Principles and Capabilities", "Advanced Features", "Technical Details", etc.
> 2. **Create a summary**: Extract the main points or key information from the text and create a concise summary.
> 3. **Organize by feature**: Group related features or capabilities together, such as "Content Generation", "Knowledge Base", "Reasoning Capabilities", etc.
>
> Please let me know if any of these options sound helpful!
On the other hand, when I use the same prompt with llama3 it works perfectly.
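Worth noting when reproducing this over the API: with `/api/chat` the endpoint is stateless, so the client is responsible for sending the full message history back on each turn. A minimal sketch of that bookkeeping (model name as above; no live server is needed for the history logic itself):

```python
import json

def build_chat_payload(model, history, user_msg):
    """Append the new user turn and build a /api/chat request body."""
    history.append({"role": "user", "content": user_msg})
    return json.dumps({"model": model, "messages": history, "stream": False})

history = []
build_chat_payload("llama3.1", history, "Hello can you tell me how much is 56*9?")
# The assistant's reply must be appended before the next turn is sent
history.append({"role": "assistant", "content": "56 * 9 = 504"})
payload = build_chat_payload("llama3.1", history, "can you divide this by 4?")
print(len(history))  # 3 messages sent back to the server on the second turn
```

If a client drops earlier messages instead of resending them, the model has no context to divide "this" by 4, which matches the symptom above.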
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.0
|
{
"login": "Damien2s",
"id": 89670386,
"node_id": "MDQ6VXNlcjg5NjcwMzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/89670386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Damien2s",
"html_url": "https://github.com/Damien2s",
"followers_url": "https://api.github.com/users/Damien2s/followers",
"following_url": "https://api.github.com/users/Damien2s/following{/other_user}",
"gists_url": "https://api.github.com/users/Damien2s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Damien2s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Damien2s/subscriptions",
"organizations_url": "https://api.github.com/users/Damien2s/orgs",
"repos_url": "https://api.github.com/users/Damien2s/repos",
"events_url": "https://api.github.com/users/Damien2s/events{/privacy}",
"received_events_url": "https://api.github.com/users/Damien2s/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5969/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1072
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1072/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1072/comments
|
https://api.github.com/repos/ollama/ollama/issues/1072/events
|
https://github.com/ollama/ollama/issues/1072
| 1,987,255,223
|
I_kwDOJ0Z1Ps52cxu3
| 1,072
|
How to host the Models Registry in a private network instead of registry.ollama.ai?
|
{
"login": "marcellodesales",
"id": 131457,
"node_id": "MDQ6VXNlcjEzMTQ1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/131457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcellodesales",
"html_url": "https://github.com/marcellodesales",
"followers_url": "https://api.github.com/users/marcellodesales/followers",
"following_url": "https://api.github.com/users/marcellodesales/following{/other_user}",
"gists_url": "https://api.github.com/users/marcellodesales/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcellodesales/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcellodesales/subscriptions",
"organizations_url": "https://api.github.com/users/marcellodesales/orgs",
"repos_url": "https://api.github.com/users/marcellodesales/repos",
"events_url": "https://api.github.com/users/marcellodesales/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcellodesales/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
open
| false
| null |
[] | null | 0
| 2023-11-10T09:34:23
| 2024-03-11T18:47:36
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# Problem
* Our private network has an Egress firewall rule not allowing connections to any `.ai` endpoints
* Just discovered it at 1:30am
* How can a user override the Models Registry server with their own?
* Do we have instructions on how to bootstrap a Models Registry server?
* Is it even intended to be used that way?
I have to go through political and legal "debates" to get this enabled... :(
```html
<body><div class="message-container">
<div class="logo"></div>
<h1>FortiGuard Intrusion Prevention - Access Blocked</h1>
<h3>Web Page Blocked</h3>
<p>You have tried to access a web page that is in violation of your Internet usage policy.</p>
<table><tbody>
<tr>
<td>Category</td>
<td>Web Hosting</td>
</tr>
<tr>
<td>URL</td>
<td>https://registry.ollama.ai/</td>
</tr>
</tbody></table>
<p>To have the rating of this web page re-evaluated <a href="https://globalurl.fortinet.net/rate/submit.php?id=1D190B43105B510A6200481D164D6675&cat=38&loc=https://152%2e195%2e19%2e97%2f&ver=9">please click here</a>.</p>
<p></p>
```
# Egress
<details><summary>curl -s -i --no-buffer http://localhost:11434/api/pull -d '{"name": "llama2"}'</summary>
```json
{"error":"pull model manifest: 403: u003c!DOCTYPE htmlu003enu003chtml lang="en"u003en u003cheadu003en u003cmeta charset="UTF-8"u003en u003cmeta http-equiv="X-UA-Compatible" content="IE=8; IE=EDGE"u003en u003cmeta name="viewport" content="width=device-width, initial-scale=1"u003en u003cstyle type="text/css"u003en body {n height: 100%;n font-family: Helvetica, Arial, sans-serif;n color: #6a6a6a;n margin: 0;n display: flex;n align-items: center;n justify-content: center;n }n input[type=date], input[type=email], input[type=number], input[type=password], input[type=search], input[type=tel], input[type=text], input[type=time], input[type=url], select, textarea {n color: #262626;n vertical-align: baseline;n margin: .2em;n border-style: solid;n border-width: 1px;n border-color: #a9a9a9;n background-color: #fff;n box-sizing: border-box;n padding: 2px .5em;n appearance: none;n border-radius: 0;n }n input:focus {n border-color: #646464;n box-shadow: 0 0 1px 0 #a2a2a2;n outline: 0;n }n button {n padding: .5em 1em;n border: 1px solid;n border-radius: 3px;n min-width: 6em;n font-weight: 400;n font-size: .8em;n cursor: pointer;n }n button.primary {n color: #fff;n background-color: rgb(47, 113, 178);n border-color: rgb(34, 103, 173);n }n .message-container {n height: 500px;n width: 600px;n padding: 0;n margin: 10px;n }n .logo {n background: url(https://globalurl.fortinet.net:8015/XX/YY/ZZ/CI/MGPGHGPGPFGHDDPFGGHGFHBGCHEGPFBGAHAH) no-repeat left center;n height: 267px;n object-fit: contain;n }n table {n background-color: #fff;n border-spacing: 0;n margin: 1em;n }n table u003e tbody u003e tr u003e td:first-of-type:not([colspan]) {n white-space: nowrap;n color: rgba(0,0,0,.5);n }n table u003e tbody u003e tr u003e td:first-of-type {n vertical-align: top;n }n table u003e tbody u003e tr u003e td {n padding: .3em .3em;n }n .field {n display: table-row;n }n .field u003e :first-child {n display: table-cell;n width: 20%;n }n .field.single u003e :first-child {n display: inline;n }n .field 
u003e :not(:first-child) {n width: auto;n max-width: 100%;n display: inline-flex;n align-items: baseline;n virtical-align: top;n box-sizing: border-box;n margin: .3em;n }n .field u003e :not(:first-child) u003e input {n width: 230px;n }n .form-footer {n display: inline-flex;n justify-content: flex-start;n }n .form-footer u003e README.md pull-model.sh {n margin: 1em;n }n .text-scrollable {n overflow: auto;n height: 150px;n border: 1px solid rgb(200, 200, 200);n padding: 5px;n font-size: 1em;n }n .text-centered {n text-align: center;n }n .text-container {n margin: 1em 1.5em;n }n .flex-container {n display: flex;n }n .flex-container.column {n flex-direction: column;n }n u003c/styleu003en u003ctitleu003eThe URL you requested has been blockedu003c/titleu003en u003c/headu003en u003cbodyu003eu003cdiv class="message-container"u003en u003cdiv class="logo"u003eu003c/divu003en u003ch1u003eFortiGuard Intrusion Prevention - Access Blockedu003c/h1u003en u003ch3u003eWeb Page Blockedu003c/h3u003en u003cpu003eThe page you have requested has been blocked because the URL is banned.u003c/pu003en u003ctableu003eu003ctbodyu003en u003ctru003en u003ctdu003eURLu003c/tdu003en u003ctdu003ehttps://registry.ollama.ai/u003c/tdu003en u003c/tru003en u003ctru003en u003ctdu003eDescriptionu003c/tdu003en u003ctdu003eu003c/tdu003en u003c/tru003en u003ctru003en u003ctdu003eURL Sourceu003c/tdu003en u003ctdu003eLocal URLfilter Blocku003c/tdu003en u003c/tru003en u003c/tbodyu003eu003c/tableu003en u003cpu003eu003c/pu003enu003c/divu003eu003c/bodyu003enu003c/htmlu003enrn"}
```
</details>
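When diagnosing this kind of egress block, it can help to first confirm what the affected host actually receives from the registry endpoint. A quick probe sketch (hostname as in the error above; the manifest path is an assumption based on the standard container-registry layout):

```shell
# A Fortinet block page shows up as an HTML body instead of a JSON manifest
curl -sSiL https://registry.ollama.ai/v2/library/llama2/manifests/latest | head -n 20
```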
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1072/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1072/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7009
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7009/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7009/comments
|
https://api.github.com/repos/ollama/ollama/issues/7009/events
|
https://github.com/ollama/ollama/pull/7009
| 2,553,784,536
|
PR_kwDOJ0Z1Ps5898_B
| 7,009
|
llama: add compiler tags for cpu features
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-09-27T21:48:58
| 2024-10-08T16:15:04
| 2024-10-08T15:53:59
|
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7009",
"html_url": "https://github.com/ollama/ollama/pull/7009",
"diff_url": "https://github.com/ollama/ollama/pull/7009.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7009.patch",
"merged_at": null
}
|
Replaced by #7137
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7009/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4053
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4053/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4053/comments
|
https://api.github.com/repos/ollama/ollama/issues/4053/events
|
https://github.com/ollama/ollama/issues/4053
| 2,271,517,232
|
I_kwDOJ0Z1Ps6HZJow
| 4,053
|
The server-side output gets mixed with the responses.
|
{
"login": "JialeLiLab",
"id": 151911704,
"node_id": "U_kgDOCQ39GA",
"avatar_url": "https://avatars.githubusercontent.com/u/151911704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JialeLiLab",
"html_url": "https://github.com/JialeLiLab",
"followers_url": "https://api.github.com/users/JialeLiLab/followers",
"following_url": "https://api.github.com/users/JialeLiLab/following{/other_user}",
"gists_url": "https://api.github.com/users/JialeLiLab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JialeLiLab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JialeLiLab/subscriptions",
"organizations_url": "https://api.github.com/users/JialeLiLab/orgs",
"repos_url": "https://api.github.com/users/JialeLiLab/repos",
"events_url": "https://api.github.com/users/JialeLiLab/events{/privacy}",
"received_events_url": "https://api.github.com/users/JialeLiLab/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-30T13:43:09
| 2024-04-30T14:48:12
| 2024-04-30T14:48:12
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi all,
When I run ollama, it mixes its server logs with my outputs directly in the terminal, making it hard to interact with. Does anyone else experience this? Any advice on how to separate the two, so I can see just my inputs and outputs without the clutter of continuous server logs?
Thanks for any suggestions!
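One common workaround (a sketch, assuming you start the server yourself rather than via a service manager): run `ollama serve` as a separate background process with its logs redirected to a file, and keep `ollama run` in a clean terminal:

```shell
# Start the server in the background, sending its logs to a file
# (assumes ollama is on PATH and no server is already running)
ollama serve > /tmp/ollama-server.log 2>&1 &

# Interact without log clutter; inspect logs separately when needed
ollama run llama3
tail -f /tmp/ollama-server.log
```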
For example:
(base) root@gpumall-ins-542069835358213:/gm-data# ./ollama run llama3
[GIN] 2024/04/30 - 21:35:49 | 200 | 147.959µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/04/30 - 21:35:49 | 200 | 4.389429ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/04/30 - 21:35:49 | 200 | 761.46µs | 127.0.0.1 | POST "/api/show"
⠋ time=2024-04-30T21:35:52.303+08:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-30T21:35:52.303+08:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-30T21:35:52.306+08:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2135940480/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.11.8.89]"
time=2024-04-30T21:35:52.309+08:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-04-30T21:35:52.309+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
⠹ time=2024-04-30T21:35:52.465+08:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
time=2024-04-30T21:35:52.535+08:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-30T21:35:52.536+08:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
⠸ time=2024-04-30T21:35:52.540+08:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2135940480/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.11.8.89]"
time=2024-04-30T21:35:52.542+08:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-04-30T21:35:52.542+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
⠼ time=2024-04-30T21:35:52.691+08:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
⠴ time=2024-04-30T21:35:52.746+08:00 level=INFO source=server.go:127 msg="offload to gpu" reallayers=33 layers=33 required="5033.0 MiB" used="5033.0 MiB" available="23996.7 MiB" kv="256.0 MiB" fulloffload="164.0 MiB" partialoffload="677.5 MiB"
time=2024-04-30T21:35:52.746+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-30T21:35:52.747+08:00 level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama2135940480/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --port 38085"
time=2024-04-30T21:35:52.747+08:00 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"23059827412992","timestamp":1714484152}
{"build":1,"commit":"7593639","function":"main","level":"INFO","line":2819,"msg":"build info","tid":"23059827412992","timestamp":1714484152}
{"function":"main","level":"INFO","line":2822,"msg":"system info","n_threads":32,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"23059827412992","timestamp":1714484152,"total_threads":64}
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 281.81 MiB
llm_load_tensors: CUDA0 buffer size = 4155.99 MiB
.
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.50 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 258.50 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 12.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"23059827412992","timestamp":1714484154}
{"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"23059827412992","timestamp":1714484154}
{"function":"main","level":"INFO","line":3064,"msg":"model loaded","tid":"23059827412992","timestamp":1714484154}
{"function":"validate_model_chat_template","level":"ERR","line":437,"msg":"The chat template comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses","tid":"23059827412992","timestamp":1714484154}
{"function":"main","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"63","port":"38085","tid":"23059827412992","timestamp":1714484154}
{"function":"update_slots","level":"INFO","line":1578,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"23059827412992","timestamp":1714484154}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"23059827412992","timestamp":1714484154}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":38750,"status":200,"tid":"23059020898304","timestamp":1714484154}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"23059827412992","timestamp":1714484154}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":2,"tid":"23059827412992","timestamp":1714484154}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":38762,"status":200,"tid":"23059018797056","timestamp":1714484154}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":3,"tid":"23059827412992","timestamp":1714484154}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33876,"status":200,"tid":"23058758840320","timestamp":1714484154}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33892,"status":200,"tid":"23058943975424","timestamp":1714484154}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":4,"tid":"23059827412992","timestamp":1714484154}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33908,"status":200,"tid":"23058760941568","timestamp":1714484154}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":5,"tid":"23059827412992","timestamp":1714484154}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33924,"status":200,"tid":"23058946076672","timestamp":1714484154}
⠇ {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":6,"tid":"23059827412992","timestamp":1714484155}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33960,"status":200,"tid":"23058756739072","timestamp":1714484155}
[GIN] 2024/04/30 - 21:35:55 | 200 | 5.818520362s | 127.0.0.1 | POST "/api/chat"
>>> how are you?
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":7,"tid":"23059827412992","timestamp":1714484161}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33968,"status":200,"tid":"23058754637824","timestamp":1714484161}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":8,"tid":"23059827412992","timestamp":1714484161}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33968,"status":200,"tid":"23058754637824","timestamp":1714484161}
{"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/tokenize","remote_addr":"127.0.0.1","remote_port":33968,"status":200,"tid":"23058754637824","timestamp":1714484161}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":9,"tid":"23059827412992","timestamp":1714484161}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33968,"status":200,"tid":"23058754637824","timestamp":1714484161}
⠙ {"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":10,"tid":"23059827412992","timestamp":1714484161}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1809,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":14,"slot_id":0,"task_id":10,"tid":"23059827412992","timestamp":1714484161}
{"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":10,"tid":"23059827412992","timestamp":1714484161}
I'm just an AI, so I don't have feelings like humans do. But I'm functioning properly and ready to help answer
your questions or chat with you about a topic of your choice! How can I assist you today?{"function":"print_timings","level":"INFO","line":269,"msg":"prompt eval time = 91.42 ms / 14 tokens ( 6.53 ms per token, 153.14 tokens per second)","n_prompt_tokens_processed":14,"n_tokens_second":153.1443823358894,"slot_id":0,"t_prompt_processing":91.417,"t_token":6.5297857142857145,"task_id":10,"tid":"23059827412992","timestamp":1714484162}
{"function":"print_timings","level":"INFO","line":283,"msg":"generation eval time = 535.35 ms / 47 runs ( 11.39 ms per token, 87.79 tokens per second)","n_decoded":47,"n_tokens_second":87.79319658764655,"slot_id":0,"t_token":11.39040425531915,"t_token_generation":535.349,"task_id":10,"tid":"23059827412992","timestamp":1714484162}
{"function":"print_timings","level":"INFO","line":293,"msg":" total time = 626.77 ms","slot_id":0,"t_prompt_processing":91.417,"t_token_generation":535.349,"t_total":626.7660000000001,"task_id":10,"tid":"23059827412992","timestamp":1714484162}
{"function":"update_slots","level":"INFO","line":1640,"msg":"slot released","n_cache_tokens":61,"n_ctx":2048,"n_past":60,"n_system_tokens":0,"slot_id":0,"task_id":10,"tid":"23059827412992","timestamp":1714484162,"truncated":false}
{"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":33968,"status":200,"tid":"23058754637824","timestamp":1714484162}
[GIN] 2024/04/30 - 21:36:02 | 200 | 759.646073ms | 127.0.0.1 | POST "/api/chat"
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.32
|
{
"login": "JialeLiLab",
"id": 151911704,
"node_id": "U_kgDOCQ39GA",
"avatar_url": "https://avatars.githubusercontent.com/u/151911704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JialeLiLab",
"html_url": "https://github.com/JialeLiLab",
"followers_url": "https://api.github.com/users/JialeLiLab/followers",
"following_url": "https://api.github.com/users/JialeLiLab/following{/other_user}",
"gists_url": "https://api.github.com/users/JialeLiLab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JialeLiLab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JialeLiLab/subscriptions",
"organizations_url": "https://api.github.com/users/JialeLiLab/orgs",
"repos_url": "https://api.github.com/users/JialeLiLab/repos",
"events_url": "https://api.github.com/users/JialeLiLab/events{/privacy}",
"received_events_url": "https://api.github.com/users/JialeLiLab/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4053/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5028
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5028/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5028/comments
|
https://api.github.com/repos/ollama/ollama/issues/5028/events
|
https://github.com/ollama/ollama/pull/5028
| 2,351,805,196
|
PR_kwDOJ0Z1Ps5yZiyU
| 5,028
|
OpenAI: /v1/models/{model} compatibility
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-13T18:29:28
| 2024-07-02T18:40:49
| 2024-07-02T18:40:48
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5028",
"html_url": "https://github.com/ollama/ollama/pull/5028",
"diff_url": "https://github.com/ollama/ollama/pull/5028.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5028.patch",
"merged_at": "2024-07-02T18:40:48"
}
|
Adds compatibility for `/v1/models/{model}`
E.g.
`curl http://localhost:11434/v1/models/llama3`
```
{
"id": "llama3",
"object": "model",
"created": 1718141294,
"owned_by": "library"
}
```
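As a quick sketch of consuming this endpoint from code (the network call itself is elided; only the four fields shown in the sample response above are assumed):

```python
import json

def parse_model(body: str) -> dict:
    """Parse the JSON body returned by GET /v1/models/{model}."""
    model = json.loads(body)
    # The compatibility endpoint returns exactly these four fields.
    return {k: model[k] for k in ("id", "object", "created", "owned_by")}

# Example using the sample response from above:
sample = '{"id": "llama3", "object": "model", "created": 1718141294, "owned_by": "library"}'
parsed = parse_model(sample)
print(parsed["id"], parsed["owned_by"])  # llama3 library
```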
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5028/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5028/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5737
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5737/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5737/comments
|
https://api.github.com/repos/ollama/ollama/issues/5737/events
|
https://github.com/ollama/ollama/issues/5737
| 2,412,422,641
|
I_kwDOJ0Z1Ps6PyqXx
| 5,737
|
Releases page: please also generate an archive with dependencies
|
{
"login": "vitaly-zdanevich",
"id": 3514015,
"node_id": "MDQ6VXNlcjM1MTQwMTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3514015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitaly-zdanevich",
"html_url": "https://github.com/vitaly-zdanevich",
"followers_url": "https://api.github.com/users/vitaly-zdanevich/followers",
"following_url": "https://api.github.com/users/vitaly-zdanevich/following{/other_user}",
"gists_url": "https://api.github.com/users/vitaly-zdanevich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitaly-zdanevich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitaly-zdanevich/subscriptions",
"organizations_url": "https://api.github.com/users/vitaly-zdanevich/orgs",
"repos_url": "https://api.github.com/users/vitaly-zdanevich/repos",
"events_url": "https://api.github.com/users/vitaly-zdanevich/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitaly-zdanevich/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2024-07-17T01:47:20
| 2024-08-19T18:14:25
| 2024-08-19T18:14:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
The Gentoo Linux package manager does not handle Go software well :(
Related: https://github.com/gentoo/guru/pull/215#issuecomment-2230746689
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5737/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/433
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/433/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/433/comments
|
https://api.github.com/repos/ollama/ollama/issues/433/events
|
https://github.com/ollama/ollama/issues/433
| 1,868,600,687
|
I_kwDOJ0Z1Ps5vYJVv
| 433
|
Add a way for user to approve CORS requests from origins without running a separate `ollama serve`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2023-08-27T19:14:09
| 2023-12-03T02:50:47
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Quite a few tools built on Ollama are hosted on alternate origins like `chrome-extension://` (chrome extensions) or `http://example.com` (websites that want to use the user's local Ollama instance).
Tools like Figma, Zoom and Slack have a way to "allow" the browser clients to interact with a server hosted on `localhost` by prompting the user once to approve an origin and then remembering this decision. We would need to be careful with the form factor here, but a dialog window (similar to Apple's [universal links](https://developer.apple.com/documentation/xcode/allowing-apps-and-websites-to-link-to-your-content)) could be a helpful starting point.
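For reference, the workaround this issue wants to avoid is restarting the server with an explicit origin allow-list via the `OLLAMA_ORIGINS` environment variable (a sketch; the origin values below are illustrative examples):

```shell
# Current workaround: restart the server with an explicit origin allow-list.
# The origin values here are examples only.
OLLAMA_ORIGINS="chrome-extension://*,http://example.com" ollama serve
```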
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/433/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/433/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7224
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7224/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7224/comments
|
https://api.github.com/repos/ollama/ollama/issues/7224/events
|
https://github.com/ollama/ollama/issues/7224
| 2,591,436,600
|
I_kwDOJ0Z1Ps6adi84
| 7,224
|
Latest version silently drops support for macOS 11
|
{
"login": "close-paren",
"id": 3870093,
"node_id": "MDQ6VXNlcjM4NzAwOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3870093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/close-paren",
"html_url": "https://github.com/close-paren",
"followers_url": "https://api.github.com/users/close-paren/followers",
"following_url": "https://api.github.com/users/close-paren/following{/other_user}",
"gists_url": "https://api.github.com/users/close-paren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/close-paren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/close-paren/subscriptions",
"organizations_url": "https://api.github.com/users/close-paren/orgs",
"repos_url": "https://api.github.com/users/close-paren/repos",
"events_url": "https://api.github.com/users/close-paren/events{/privacy}",
"received_events_url": "https://api.github.com/users/close-paren/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-10-16T10:37:58
| 2024-10-17T18:35:11
| 2024-10-17T18:35:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
The ollama CLI binary now appears to be targeting macOS 12 and higher, whereas the frontend still runs on macOS 11. This results in very high CPU usage from ReportCrash due to continuous crashing and respawning.
Sample crash report:
```
Process: ollama [45502]
Path: /Applications/Ollama.app/Contents/Resources/ollama
Identifier: ollama
Version: 0
Code Type: X86-64 (Native)
Parent Process: Ollama [541]
Responsible: Ollama [541]
User ID: 501
Date/Time: 2024-10-15 17:18:15.107 +0100
OS Version: macOS 11.7.10 (20G1427)
Report Version: 12
Anonymous UUID: 686F63CD-C047-43CE-2313-FEDCDDD19B56
Sleep/Wake UUID: 246772B8-8C50-40CC-9722-70423F4DB8B7
Time Awake Since Boot: 140000 seconds
Time Since Wake: 24000 seconds
System Integrity Protection: enabled
Crashed Thread: 0
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Termination Reason: DYLD, [0x4] Symbol missing
Application Specific Information:
dyld: launch, loading dependent libraries
Dyld Error Message:
Symbol not found: __ZTTNSt3__114basic_ifstreamIcNS_11char_traitsIcEEEE
Referenced from: /Applications/Ollama.app/Contents/Resources/ollama (which was built for Mac OS X 12.0)
Expected in: /usr/lib/libc++.1.dylib
in /Applications/Ollama.app/Contents/Resources/ollama
Binary Images:
0x10d52c000 - 0x10e997fff +ollama (0) <E70F6508-9861-332D-A44F-53B632F09B77> /Applications/Ollama.app/Contents/Resources/ollama
0x1175bf000 - 0x11765afff dyld (852.2) <BD607394-9008-33B9-B98B-A5886668E52C> /usr/lib/dyld
0x7fff20366000 - 0x7fff20367fff libsystem_blocks.dylib (79) <F5B25F38-FC21-3BF5-A147-3B913DA098BE> /usr/lib/system/libsystem_blocks.dylib
0x7fff20368000 - 0x7fff2039dfff libxpc.dylib (2038.120.1.701.2) <151C64CA-CA6F-3989-A558-796EB6ED0C11> /usr/lib/system/libxpc.dylib
0x7fff2039e000 - 0x7fff203b5fff libsystem_trace.dylib (1277.120.1) <1F20357C-395F-3095-B525-AD9403290A92> /usr/lib/system/libsystem_trace.dylib
0x7fff203b6000 - 0x7fff20453fff libcorecrypto.dylib (1000.140.4) <BDD3FF5E-34F8-3AC0-A05C-F9AC17C88BBF> /usr/lib/system/libcorecrypto.dylib
0x7fff20454000 - 0x7fff20480fff libsystem_malloc.dylib (317.140.5) <3AB4C7E9-C49C-3EB7-9370-370F3F655024> /usr/lib/system/libsystem_malloc.dylib
0x7fff20481000 - 0x7fff204c5fff libdispatch.dylib (1271.120.2) <5D824C33-C5E2-38A8-BD00-D934443DBDAB> /usr/lib/system/libdispatch.dylib
0x7fff204c6000 - 0x7fff204fffff libobjc.A.dylib (824.1) <A0961DED-3477-3856-A6BC-CFE2475CB2F4> /usr/lib/libobjc.A.dylib
0x7fff20500000 - 0x7fff20502fff libsystem_featureflags.dylib (28.60.1) <2BAC8770-AFC8-3FE2-B6C6-27CE44B2B2BA> /usr/lib/system/libsystem_featureflags.dylib
0x7fff20503000 - 0x7fff2058bfff libsystem_c.dylib (1439.141.1) <BC8BCEEA-CA52-32C7-9FF5-E444CF9EF66A> /usr/lib/system/libsystem_c.dylib
0x7fff2058c000 - 0x7fff205e1fff libc++.1.dylib (905.6) <5BA6B5ED-7842-3B13-86B0-00EB511CE2FE> /usr/lib/libc++.1.dylib
0x7fff205e2000 - 0x7fff205f7fff libc++abi.dylib (905.6) <B96FC1DD-0056-3E11-862A-C0BB8239FEA0> /usr/lib/libc++abi.dylib
0x7fff205f8000 - 0x7fff20627fff libsystem_kernel.dylib (7195.141.49.702.12) <BA061E84-6D44-3037-832D-E86D783FA917> /usr/lib/system/libsystem_kernel.dylib
0x7fff20628000 - 0x7fff20633fff libsystem_pthread.dylib (454.120.2.700.1) <409239A7-2E4E-31C7-87EB-EE50B7981204> /usr/lib/system/libsystem_pthread.dylib
0x7fff20634000 - 0x7fff2066ffff libdyld.dylib (852.2) <FD8DB5BC-F199-3524-9DC4-DAEC0E94712F> /usr/lib/system/libdyld.dylib
0x7fff20670000 - 0x7fff20679fff libsystem_platform.dylib (254.80.2) <52A77346-8AA5-3BB7-906D-C7503B491CF9> /usr/lib/system/libsystem_platform.dylib
0x7fff2067a000 - 0x7fff206a5fff libsystem_info.dylib (542.40.4) <406353B2-E48A-3D20-B08F-0AB26ED8A0B3> /usr/lib/system/libsystem_info.dylib
0x7fff206a6000 - 0x7fff20b43fff com.apple.CoreFoundation (6.9 - 1778.105) <B4B8042A-9415-3F26-91AC-735C968B0D95> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
0x7fff20b44000 - 0x7fff20d7bfff com.apple.LaunchServices (1122.45 - 1122.45) <42ED2E08-904B-3B62-B0B6-DACBE4988AAB> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/LaunchServices
0x7fff20d7c000 - 0x7fff20e50fff com.apple.gpusw.MetalTools (1.0 - 1) <72285C8A-5F98-31A0-9CA1-30CF4387584B> /System/Library/PrivateFrameworks/MetalTools.framework/Versions/A/MetalTools
0x7fff20e51000 - 0x7fff210adfff libBLAS.dylib (1336.140.1) <D4B16233-BAE7-3D63-BB59-5DCEC63345EB> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
0x7fff210ae000 - 0x7fff210fbfff com.apple.Lexicon-framework (1.0 - 86.2) <09EC8AE4-7FC7-3D2D-A6DD-C484B664B1D5> /System/Library/PrivateFrameworks/Lexicon.framework/Versions/A/Lexicon
0x7fff210fc000 - 0x7fff2116afff libSparse.dylib (106) <0FD77742-B7DB-3296-9D0F-0DEF7EB4FF7D> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libSparse.dylib
0x7fff2116b000 - 0x7fff211e8fff com.apple.SystemConfiguration (1.20 - 1.20) <D59BEA1F-BD5D-383A-8977-64F5B72F16C4> /System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration
0x7fff211e9000 - 0x7fff2121dfff libCRFSuite.dylib (50) <2DADF4F9-0BD3-33CF-9939-979E69F2453C> /usr/lib/libCRFSuite.dylib
0x7fff2121e000 - 0x7fff21456fff libmecabra.dylib (929.10) <58AA4922-A668-3165-802C-5FB4DF848E40> /usr/lib/libmecabra.dylib
0x7fff21457000 - 0x7fff217b5fff com.apple.Foundation (6.9 - 1778.105) <4F4709DD-C198-3AA1-86A0-71D2F2FDD65D> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation
0x7fff217b6000 - 0x7fff2189efff com.apple.LanguageModeling (1.0 - 247.3) <EAAF99AF-2D5F-3EC5-B7F7-41D7236A09F3> /System/Library/PrivateFrameworks/LanguageModeling.framework/Versions/A/LanguageModeling
0x7fff2189f000 - 0x7fff219d5fff com.apple.CoreDisplay (237.4 - 237.4) <CDD47724-D213-3665-BD34-A51F374AE94F> /System/Library/Frameworks/CoreDisplay.framework/Versions/A/CoreDisplay
0x7fff219d6000 - 0x7fff21c46fff com.apple.audio.AudioToolboxCore (1.0 - 1181.72.5) <541A108B-D52C-3F7B-B004-F31E16243BAF> /System/Library/PrivateFrameworks/AudioToolboxCore.framework/Versions/A/AudioToolboxCore
0x7fff21c47000 - 0x7fff21e2cfff com.apple.CoreText (677.6.0.4 - 677.6.0.4) <1E81E372-02B3-3E9C-BB48-33DC80E49158> /System/Library/Frameworks/CoreText.framework/Versions/A/CoreText
0x7fff21e2d000 - 0x7fff224befff com.apple.audio.CoreAudio (5.0 - 5.0) <A5ED9C97-E177-388F-AED8-D760C6963377> /System/Library/Frameworks/CoreAudio.framework/Versions/A/CoreAudio
0x7fff224bf000 - 0x7fff22816fff com.apple.security (7.0 - 59754.141.1.702.3) <5A52B8E8-B1AF-3F29-AC97-5DBEE8C6A6AC> /System/Library/Frameworks/Security.framework/Versions/A/Security
0x7fff22817000 - 0x7fff22a76fff libicucore.A.dylib (66112.1) <9F2A881A-25DA-3386-9DCE-D2B67C2A4141> /usr/lib/libicucore.A.dylib
0x7fff22a77000 - 0x7fff22a80fff libsystem_darwin.dylib (1439.141.1) <75592BEC-777B-381F-8C07-15B8A4C712A7> /usr/lib/system/libsystem_darwin.dylib
0x7fff22a81000 - 0x7fff22d6cfff com.apple.CoreServices.CarbonCore (1307.3 - 1307.3) <76566083-9F9C-3055-812A-079693A69D32> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/CarbonCore
0x7fff22dac000 - 0x7fff22de6fff com.apple.CSStore (1122.45 - 1122.45) <65919E05-BE7E-39AC-8768-B32E41E325C0> /System/Library/PrivateFrameworks/CoreServicesStore.framework/Versions/A/CoreServicesStore
0x7fff22de7000 - 0x7fff22e95fff com.apple.framework.IOKit (2.0.2 - 1845.120.6) <A395F442-1253-3CA9-953F-7A235EEB7F67> /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit
0x7fff22e96000 - 0x7fff22ea1fff libsystem_notify.dylib (279.40.4) <02E22D9D-01E2-361C-BB9A-B5BE18D28280> /usr/lib/system/libsystem_notify.dylib
0x7fff24308000 - 0x7fff2498efff libnetwork.dylib (2288.140.9) <2DE517EE-E318-366B-A7FA-AD5F62D007CB> /usr/lib/libnetwork.dylib
0x7fff2498f000 - 0x7fff24e2cfff com.apple.CFNetwork (1240.0.4.5 - 1240.0.4.5) <83B8DEAA-82EE-36DD-ADF8-45E8A807BC21> /System/Library/Frameworks/CFNetwork.framework/Versions/A/CFNetwork
0x7fff24e2d000 - 0x7fff24e3bfff libsystem_networkextension.dylib (1295.140.4.701.1) <9C5A85AC-C593-34FD-8481-5CFC05DE3897> /usr/lib/system/libsystem_networkextension.dylib
0x7fff24e3c000 - 0x7fff24e3cfff libenergytrace.dylib (22.100.1) <EDE247D7-22AC-3339-AC3E-04A5BD13E3F2> /usr/lib/libenergytrace.dylib
0x7fff24e3d000 - 0x7fff24e99fff libMobileGestalt.dylib (978.140.1) <AC0BF1F3-5052-3FD8-808D-CBF55B3F7551> /usr/lib/libMobileGestalt.dylib
0x7fff24e9a000 - 0x7fff24eb0fff libsystem_asl.dylib (385.0.2) <88F4051D-1CF5-314E-A952-247C38996E16> /usr/lib/system/libsystem_asl.dylib
0x7fff24eb1000 - 0x7fff24ec9fff com.apple.TCC (1.0 - 1) <898C8BE6-EBC0-3BEB-B898-2EF336802530> /System/Library/PrivateFrameworks/TCC.framework/Versions/A/TCC
0x7fff24eca000 - 0x7fff2522ffff com.apple.SkyLight (1.600.0 - 588.11) <370EFABF-8439-32C8-9136-0CB058CEE1A1> /System/Library/PrivateFrameworks/SkyLight.framework/Versions/A/SkyLight
0x7fff25230000 - 0x7fff258b9fff com.apple.CoreGraphics (2.0 - 1463.19.1) <C911B812-7401-3EAF-B365-A9B7B98B708A> /System/Library/Frameworks/CoreGraphics.framework/Versions/A/CoreGraphics
0x7fff258ba000 - 0x7fff259b1fff com.apple.ColorSync (4.13.0 - 3473.8.1) <B66E5ABE-B72B-3607-8FB6-3CE54142829A> /System/Library/Frameworks/ColorSync.framework/Versions/A/ColorSync
0x7fff261d4000 - 0x7fff261e9fff com.apple.ProtocolBuffer (1 - 285.24.10.20.1) <6EC4B8BC-C44A-3211-A0B5-A7298518231B> /System/Library/PrivateFrameworks/ProtocolBuffer.framework/Versions/A/ProtocolBuffer
0x7fff261ea000 - 0x7fff2639efff libsqlite3.dylib (321.4) <2CBF5CD2-BECF-331B-904C-A88A54C6F6ED> /usr/lib/libsqlite3.dylib
0x7fff2641c000 - 0x7fff26433fff com.apple.commonutilities (8.0 - 900) <4D28711F-3425-31EB-A9D5-3FA489461EA3> /System/Library/PrivateFrameworks/CommonUtilities.framework/Versions/A/CommonUtilities
0x7fff26434000 - 0x7fff264b3fff com.apple.BaseBoard (526 - 526) <8ABD1C28-584C-33E7-8BE8-4EFC5EEF1575> /System/Library/PrivateFrameworks/BaseBoard.framework/Versions/A/BaseBoard
0x7fff264b4000 - 0x7fff264fcfff com.apple.RunningBoardServices (1.0 - 505.100.9) <BE0AEF0C-A31A-32F1-8157-9560A1A24633> /System/Library/PrivateFrameworks/RunningBoardServices.framework/Versions/A/RunningBoardServices
0x7fff264fd000 - 0x7fff26571fff com.apple.AE (918.6 - 918.6) <677BFC57-B830-3090-9470-A21CB2A77C76> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/AE.framework/Versions/A/AE
0x7fff26572000 - 0x7fff26578fff libdns_services.dylib (1310.140.1) <EABE9A6A-96DE-3A2E-B0E0-17F277A65757> /usr/lib/libdns_services.dylib
0x7fff26579000 - 0x7fff26580fff libsystem_symptoms.dylib (1431.140.1) <E9CB193F-260B-3835-B76E-A2209343FA1E> /usr/lib/system/libsystem_symptoms.dylib
0x7fff2670d000 - 0x7fff2673cfff com.apple.analyticsd (1.0 - 1) <23CB7B45-967B-37B3-AF21-21B4885790CC> /System/Library/PrivateFrameworks/CoreAnalytics.framework/Versions/A/CoreAnalytics
0x7fff2673d000 - 0x7fff2673ffff libDiagnosticMessagesClient.dylib (112) <8CE0D64A-597F-3048-80C3-590D866D067A> /usr/lib/libDiagnosticMessagesClient.dylib
0x7fff26740000 - 0x7fff2678cfff com.apple.spotlight.metadata.utilities (1.0 - 2150.30) <9B61E5D5-27C3-3282-A650-A2D15FA76FF7> /System/Library/PrivateFrameworks/MetadataUtilities.framework/Versions/A/MetadataUtilities
0x7fff2678d000 - 0x7fff26827fff com.apple.Metadata (10.7.0 - 2150.30) <FEBC2256-7D84-3F2E-A770-A8665F62E20A> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/Metadata.framework/Versions/A/Metadata
0x7fff26828000 - 0x7fff2682efff com.apple.DiskArbitration (2.7 - 2.7) <21325211-A5F7-3AB9-BDFE-6B6DC06E587E> /System/Library/Frameworks/DiskArbitration.framework/Versions/A/DiskArbitration
0x7fff2682f000 - 0x7fff26e96fff com.apple.vImage (8.1 - 544.6) <1DD123D7-ACC3-3FCB-838E-C91C6E4D31B8> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/vImage
0x7fff26e97000 - 0x7fff27174fff com.apple.QuartzCore (1.11 - 927.24) <C2F144F0-B972-390B-B0DC-51B79F3B4DF2> /System/Library/Frameworks/QuartzCore.framework/Versions/A/QuartzCore
0x7fff27175000 - 0x7fff271b6fff libFontRegistry.dylib (309.0.0.2) <D13D1774-9FC2-3A3B-BFD0-8ABFAF9AE1E0> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources/libFontRegistry.dylib
0x7fff273f0000 - 0x7fff273fffff com.apple.OpenDirectory (11.7 - 230.40.1) <B7BB547E-B00F-37B3-A4A8-AF414F029E64> /System/Library/Frameworks/OpenDirectory.framework/Versions/A/OpenDirectory
0x7fff27400000 - 0x7fff2741ffff com.apple.CFOpenDirectory (11.7 - 230.40.1) <E4682D99-DD7C-3C74-A0A1-E561B6E616C6> /System/Library/Frameworks/OpenDirectory.framework/Versions/A/Frameworks/CFOpenDirectory.framework/Versions/A/CFOpenDirectory
0x7fff27420000 - 0x7fff2742cfff com.apple.CoreServices.FSEvents (1290.120.6 - 1290.120.6) <78184C84-4633-3867-AACD-8F0256F40D5A> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/FSEvents.framework/Versions/A/FSEvents
0x7fff2742d000 - 0x7fff27451fff com.apple.coreservices.SharedFileList (144 - 144) <243CAB7D-EA1A-3322-9833-B4B24F63AB3E> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/SharedFileList.framework/Versions/A/SharedFileList
0x7fff27452000 - 0x7fff27454fff libapp_launch_measurement.dylib (14.1) <2AE731D8-757E-3A23-8375-9D266B762CC3> /usr/lib/libapp_launch_measurement.dylib
0x7fff27455000 - 0x7fff2749cfff com.apple.CoreAutoLayout (1.0 - 21.10.1) <32846C89-8FED-3225-B370-34FB1DA82A85> /System/Library/PrivateFrameworks/CoreAutoLayout.framework/Versions/A/CoreAutoLayout
0x7fff2749d000 - 0x7fff2757ffff libxml2.2.dylib (34.26) <29CE75F5-D4D3-35BD-9B89-3B8970980C55> /usr/lib/libxml2.2.dylib
0x7fff27580000 - 0x7fff275cdfff com.apple.CoreVideo (1.8 - 0.0) <FE8A9F52-4140-36D8-BE09-800E75857EAD> /System/Library/Frameworks/CoreVideo.framework/Versions/A/CoreVideo
0x7fff28588000 - 0x7fff28598fff libsystem_containermanager.dylib (318.100.4.700.1) <45445167-AFC7-3406-A858-9AE8D8F45907> /usr/lib/system/libsystem_containermanager.dylib
0x7fff28599000 - 0x7fff285aafff com.apple.IOSurface (290.8.2 - 290.8.2) <B98B7126-FFF8-343F-BE66-32212DE3BEBE> /System/Library/Frameworks/IOSurface.framework/Versions/A/IOSurface
0x7fff285ab000 - 0x7fff285b4fff com.apple.IOAccelerator (442.10 - 442.10) <19FBA808-F918-3BB2-BE78-A1B0D10D724D> /System/Library/PrivateFrameworks/IOAccelerator.framework/Versions/A/IOAccelerator
0x7fff285b5000 - 0x7fff286d8fff com.apple.Metal (244.303 - 244.303) <A9397F90-E221-397B-BA10-B52135A72D68> /System/Library/Frameworks/Metal.framework/Versions/A/Metal
0x7fff286d9000 - 0x7fff286f5fff com.apple.audio.caulk (1.0 - 70) <A8D1B95D-91AF-3FF5-9CD7-93661045C83A> /System/Library/PrivateFrameworks/caulk.framework/Versions/A/caulk
0x7fff286f6000 - 0x7fff287e0fff com.apple.CoreMedia (1.0 - 2780.10.4.1.1) <237F9A0B-FE1F-3E4D-BE4D-8E27989EA845> /System/Library/Frameworks/CoreMedia.framework/Versions/A/CoreMedia
0x7fff287e1000 - 0x7fff2893dfff libFontParser.dylib (305.6.0.6) <E2A105F9-8B39-3A45-94CE-6BF2DB31EB25> /System/Library/PrivateFrameworks/FontServices.framework/libFontParser.dylib
0x7fff28c7c000 - 0x7fff28eb9fff com.apple.ImageIO (3.3.0 - 2130.16.2) <15C8679A-DFE6-3664-977F-A82A97FD59C1> /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO
0x7fff28eba000 - 0x7fff29235fff com.apple.CoreImage (16.3.0 - 1140.2) <3C2DE86C-89FA-3EC1-ACFE-6BE95F54B123> /System/Library/Frameworks/CoreImage.framework/Versions/A/CoreImage
0x7fff29236000 - 0x7fff2929cfff com.apple.MetalPerformanceShaders.MPSCore (1.0 - 1) <02F2E0C6-0C0F-3390-A63B-189832967015> /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/Frameworks/MPSCore.framework/Versions/A/MPSCore
0x7fff2929d000 - 0x7fff292a0fff libsystem_configuration.dylib (1109.140.1) <53B71513-3009-3A8C-A5AA-9C15DD0AB54E> /usr/lib/system/libsystem_configuration.dylib
0x7fff292a1000 - 0x7fff292a5fff libsystem_sandbox.dylib (1441.141.13.701.2) <1E19BC49-484C-32BB-8BB7-99D41C63F86E> /usr/lib/system/libsystem_sandbox.dylib
0x7fff292a6000 - 0x7fff292a7fff com.apple.AggregateDictionary (1.0 - 1) <CD5E6E8F-7AB6-345E-9243-D5D674DC0225> /System/Library/PrivateFrameworks/AggregateDictionary.framework/Versions/A/AggregateDictionary
0x7fff292a8000 - 0x7fff292abfff com.apple.AppleSystemInfo (3.1.5 - 3.1.5) <15CBB967-FAAE-3A22-A87F-4833A9D835E3> /System/Library/PrivateFrameworks/AppleSystemInfo.framework/Versions/A/AppleSystemInfo
0x7fff292ac000 - 0x7fff292adfff liblangid.dylib (136) <D6DDBEB6-7A9A-3F00-8DEF-18934CFC0A08> /usr/lib/liblangid.dylib
0x7fff292ae000 - 0x7fff29352fff com.apple.CoreNLP (1.0 - 245.2) <F40C2289-9A6D-3C55-A6DA-FFAD41636415> /System/Library/PrivateFrameworks/CoreNLP.framework/Versions/A/CoreNLP
0x7fff29353000 - 0x7fff29359fff com.apple.LinguisticData (1.0 - 399) <E6DC793D-3133-3D9B-BCF8-E4A628E45586> /System/Library/PrivateFrameworks/LinguisticData.framework/Versions/A/LinguisticData
0x7fff2935a000 - 0x7fff29a02fff libBNNS.dylib (288.100.5) <1E45AC70-6C75-3F27-9252-40DF6B2D674A> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBNNS.dylib
0x7fff29a03000 - 0x7fff29bd5fff libvDSP.dylib (760.100.3) <7F1276C0-C9F6-3C6F-A0F7-1EB4EA666BD8> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvDSP.dylib
0x7fff29bd6000 - 0x7fff29be7fff com.apple.CoreEmoji (1.0 - 128.4) <011AA15B-6988-3F36-81A3-2B52B561D6E0> /System/Library/PrivateFrameworks/CoreEmoji.framework/Versions/A/CoreEmoji
0x7fff29be8000 - 0x7fff29bf2fff com.apple.IOMobileFramebuffer (343.0.0 - 343.0.0) <28991DA2-1726-3F77-A9C5-4BB5AAEFA166> /System/Library/PrivateFrameworks/IOMobileFramebuffer.framework/Versions/A/IOMobileFramebuffer
0x7fff29eeb000 - 0x7fff29efbfff com.apple.AssertionServices (1.0 - 505.100.9) <1B805E53-D42F-3019-88F0-64D3BD287DDB> /System/Library/PrivateFrameworks/AssertionServices.framework/Versions/A/AssertionServices
0x7fff29efc000 - 0x7fff29f87fff com.apple.securityfoundation (6.0 - 55240.40.4) <D1E23625-27EF-37F4-93B8-E3162C1943BA> /System/Library/Frameworks/SecurityFoundation.framework/Versions/A/SecurityFoundation
0x7fff29f88000 - 0x7fff29f91fff com.apple.coreservices.BackgroundTaskManagement (1.0 - 104) <8CF5B495-3026-3CE1-9EFC-8D7D71380A43> /System/Library/PrivateFrameworks/BackgroundTaskManagement.framework/Versions/A/BackgroundTaskManagement
0x7fff29f92000 - 0x7fff29f96fff com.apple.xpc.ServiceManagement (1.0 - 1) <D561E8B7-690C-3D18-A1E8-C4B01B8B9C11> /System/Library/Frameworks/ServiceManagement.framework/Versions/A/ServiceManagement
0x7fff29f97000 - 0x7fff29f99fff libquarantine.dylib (119.40.4) <21C63859-6DFB-3463-9ADF-BB44FB28067C> /usr/lib/system/libquarantine.dylib
0x7fff29f9a000 - 0x7fff29fa5fff libCheckFix.dylib (31) <1C2B822D-29D6-36E2-BBA3-F72DE49E038B> /usr/lib/libCheckFix.dylib
0x7fff29fa6000 - 0x7fff29fbdfff libcoretls.dylib (169.100.1) <FC8265A0-9659-35D9-BA6F-6507A44742FE> /usr/lib/libcoretls.dylib
0x7fff29fbe000 - 0x7fff29fcefff libbsm.0.dylib (68.40.1) <0CF67F8A-268D-320A-A3A4-D7C2D9AB8027> /usr/lib/libbsm.0.dylib
0x7fff29fcf000 - 0x7fff2a018fff libmecab.dylib (929.10) <47A982DF-1436-366E-AC45-1DA068832AED> /usr/lib/libmecab.dylib
0x7fff2a019000 - 0x7fff2a01efff libgermantok.dylib (24) <189F508A-723B-345D-918F-178CF15077F3> /usr/lib/libgermantok.dylib
0x7fff2a01f000 - 0x7fff2a034fff libLinearAlgebra.dylib (1336.140.1) <27358E5F-256F-309F-AAC8-BAC4A56C7BF4> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLinearAlgebra.dylib
0x7fff2a035000 - 0x7fff2a253fff com.apple.MetalPerformanceShaders.MPSNeuralNetwork (1.0 - 1) <B7F8218A-2DA2-35A4-9200-3BD52CCF125C> /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/Frameworks/MPSNeuralNetwork.framework/Versions/A/MPSNeuralNetwork
0x7fff2a254000 - 0x7fff2a2a3fff com.apple.MetalPerformanceShaders.MPSRayIntersector (1.0 - 1) <3993AC67-62B4-3E49-B5BF-E8F814CE6C97> /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/Frameworks/MPSRayIntersector.framework/Versions/A/MPSRayIntersector
0x7fff2a2a4000 - 0x7fff2a405fff com.apple.MLCompute (1.0 - 1) <6026D664-0453-321F-81FE-A40AD902849E> /System/Library/Frameworks/MLCompute.framework/Versions/A/MLCompute
0x7fff2a406000 - 0x7fff2a43cfff com.apple.MetalPerformanceShaders.MPSMatrix (1.0 - 1) <A194A321-8DD9-3051-97EC-3C4630946007> /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/Frameworks/MPSMatrix.framework/Versions/A/MPSMatrix
0x7fff2a43d000 - 0x7fff2a493fff com.apple.MetalPerformanceShaders.MPSNDArray (1.0 - 1) <A72429D4-3BED-34DD-BEDE-322A0975A8BC> /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/Frameworks/MPSNDArray.framework/Versions/A/MPSNDArray
0x7fff2a494000 - 0x7fff2a524fff com.apple.MetalPerformanceShaders.MPSImage (1.0 - 1) <0B333F06-FAD5-3689-9017-15334AD4F51C> /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/Frameworks/MPSImage.framework/Versions/A/MPSImage
0x7fff2a525000 - 0x7fff2a534fff com.apple.AppleFSCompression (125 - 1.0) <1C5279EE-8F78-386E-9E4D-24A3785CACA2> /System/Library/PrivateFrameworks/AppleFSCompression.framework/Versions/A/AppleFSCompression
0x7fff2a535000 - 0x7fff2a541fff libbz2.1.0.dylib (44) <6E82D414-3810-36CF-94FF-B1BDF48DB501> /usr/lib/libbz2.1.0.dylib
0x7fff2a542000 - 0x7fff2a546fff libsystem_coreservices.dylib (127.1) <6D84FA08-CB2B-34E1-9AB4-A54E82CB9161> /usr/lib/system/libsystem_coreservices.dylib
0x7fff2a547000 - 0x7fff2a574fff com.apple.CoreServices.OSServices (1122.45 - 1122.45) <097586DB-22C5-323A-BC5C-5AF75613846D> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/OSServices.framework/Versions/A/OSServices
0x7fff2a743000 - 0x7fff2a755fff libz.1.dylib (76.140.1) <A2FF8D14-1632-3047-9829-CC90239F97FF> /usr/lib/libz.1.dylib
0x7fff2a756000 - 0x7fff2a79dfff libsystem_m.dylib (3186.100.3) <1836B380-C579-3195-BC3F-77404D432186> /usr/lib/system/libsystem_m.dylib
0x7fff2a79e000 - 0x7fff2a79efff libcharset.1.dylib (59) <3A46C22D-E678-356B-9BAD-6E837704D662> /usr/lib/libcharset.1.dylib
0x7fff2a79f000 - 0x7fff2a7a4fff libmacho.dylib (980) <F7BDAFE5-4E49-39DD-8F94-CD5E49C91A90> /usr/lib/system/libmacho.dylib
0x7fff2a7a5000 - 0x7fff2a7c0fff libkxld.dylib (7195.141.49.702.12) <6585C769-FACC-3E47-844B-C7011292F3C5> /usr/lib/system/libkxld.dylib
0x7fff2a7c1000 - 0x7fff2a7ccfff libcommonCrypto.dylib (60178.120.3) <B057F752-3057-394D-A3F6-AA11A04A6392> /usr/lib/system/libcommonCrypto.dylib
0x7fff2a7cd000 - 0x7fff2a7d7fff libunwind.dylib (201) <9D6A6228-8DC3-3521-B458-4EDE4A9F5E65> /usr/lib/system/libunwind.dylib
0x7fff2a7d8000 - 0x7fff2a7dffff liboah.dylib (203.58) <AC9E8A76-FCAA-3F97-802A-D22EF770463B> /usr/lib/liboah.dylib
0x7fff2a7e0000 - 0x7fff2a7eafff libcopyfile.dylib (173.40.2) <BD7EAE7B-28C1-36DF-96B8-F506D50DFF28> /usr/lib/system/libcopyfile.dylib
0x7fff2a7eb000 - 0x7fff2a7f2fff libcompiler_rt.dylib (102.2) <BA910DC2-C697-3DAD-9A70-7C8CD5217AC3> /usr/lib/system/libcompiler_rt.dylib
0x7fff2a7f3000 - 0x7fff2a7f5fff libsystem_collections.dylib (1439.141.1) <21F2EF42-56ED-3E0F-9C29-94E0888DC52C> /usr/lib/system/libsystem_collections.dylib
0x7fff2a7f6000 - 0x7fff2a7f8fff libsystem_secinit.dylib (87.60.1) <E976428F-F9E2-334B-AA91-9AAD40234718> /usr/lib/system/libsystem_secinit.dylib
0x7fff2a7f9000 - 0x7fff2a7fbfff libremovefile.dylib (49.120.1) <5AC9F8EC-F0E8-3D8A-ADB5-96B5FB581896> /usr/lib/system/libremovefile.dylib
0x7fff2a7fc000 - 0x7fff2a7fcfff libkeymgr.dylib (31) <9FBE08F6-0679-3976-AFDC-1EAF40C3958F> /usr/lib/system/libkeymgr.dylib
0x7fff2a7fd000 - 0x7fff2a804fff libsystem_dnssd.dylib (1310.140.1) <8C4D6C93-285F-3587-A986-5BB96A1C664F> /usr/lib/system/libsystem_dnssd.dylib
0x7fff2a805000 - 0x7fff2a80afff libcache.dylib (83) <56DCEFF5-111E-32FD-B4E9-E148507C4FEC> /usr/lib/system/libcache.dylib
0x7fff2a80b000 - 0x7fff2a80cfff libSystem.B.dylib (1292.120.1) <A8E7368E-58FA-31E5-8D4D-FC2FED6100E6> /usr/lib/libSystem.B.dylib
0x7fff2a80d000 - 0x7fff2a810fff libfakelink.dylib (3) <6002BC93-3627-366E-8D21-A552D56CB215> /usr/lib/libfakelink.dylib
0x7fff2a811000 - 0x7fff2a811fff com.apple.SoftLinking (1.0 - 1) <3D0CEDFD-B263-39CA-8B31-E0A498D05EB3> /System/Library/PrivateFrameworks/SoftLinking.framework/Versions/A/SoftLinking
0x7fff2a812000 - 0x7fff2a849fff libpcap.A.dylib (98.100.3) <236EE73F-6D38-38E0-9BC0-B427DEB7F9FD> /usr/lib/libpcap.A.dylib
0x7fff2a84a000 - 0x7fff2a93afff libiconv.2.dylib (59) <DEE0153A-BDF9-33CA-B8C7-3C39DB906B5E> /usr/lib/libiconv.2.dylib
0x7fff2a93b000 - 0x7fff2a94cfff libcmph.dylib (8) <83A69507-07D1-387F-9D06-1011E7909EAC> /usr/lib/libcmph.dylib
0x7fff2a94d000 - 0x7fff2a9befff libarchive.2.dylib (83.100.2) <45B577F5-0064-3E73-89B8-BE4A121B214F> /usr/lib/libarchive.2.dylib
0x7fff2a9bf000 - 0x7fff2aa26fff com.apple.SearchKit (1.4.1 - 1.4.1) <7C264603-379D-38BF-A3EC-49C01059C5E5> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/SearchKit.framework/Versions/A/SearchKit
0x7fff2aa27000 - 0x7fff2aa28fff libThaiTokenizer.dylib (3) <BA265C01-176E-3F7D-97F6-7FAABB0CAEC8> /usr/lib/libThaiTokenizer.dylib
0x7fff2aa29000 - 0x7fff2aa4bfff com.apple.applesauce (1.0 - 16.28) <EAFF4FEC-51F3-3D0D-9D99-E62E75937F1B> /System/Library/PrivateFrameworks/AppleSauce.framework/Versions/A/AppleSauce
0x7fff2aa4c000 - 0x7fff2aa63fff libapple_nghttp2.dylib (1.41) <AC9520D7-D54F-3031-9503-FEA5A5ED5E56> /usr/lib/libapple_nghttp2.dylib
0x7fff2aa64000 - 0x7fff2aa7afff libSparseBLAS.dylib (1336.140.1) <7D926256-F187-33CA-87D6-74F1660C438A> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libSparseBLAS.dylib
0x7fff2aa7b000 - 0x7fff2aa7cfff com.apple.MetalPerformanceShaders.MetalPerformanceShaders (1.0 - 1) <9BFE310E-E910-3228-BDF5-21A7C4468D89> /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/MetalPerformanceShaders
0x7fff2aa7d000 - 0x7fff2aa82fff libpam.2.dylib (28.40.1.700.1) <564320AF-69E5-3FEE-BE3A-E500B9B6786F> /usr/lib/libpam.2.dylib
0x7fff2aa83000 - 0x7fff2aaa2fff libcompression.dylib (96.120.1) <F36054C1-6074-3A22-82EF-6F4A2A52599C> /usr/lib/libcompression.dylib
0x7fff2aaa3000 - 0x7fff2aaa8fff libQuadrature.dylib (7) <256CB21E-2878-3F22-B4B5-E1FB60D64C9E> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libQuadrature.dylib
0x7fff2aaa9000 - 0x7fff2ae46fff libLAPACK.dylib (1336.140.1) <02F2D4D1-8763-32D1-B5F9-9DD439EFC8E8> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib
0x7fff2ae47000 - 0x7fff2ae96fff com.apple.DictionaryServices (1.2 - 341) <FB843860-C7D5-3060-B50E-303A3CBAE9A9> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/DictionaryServices.framework/Versions/A/DictionaryServices
0x7fff2ae97000 - 0x7fff2aeaffff liblzma.5.dylib (16) <455C9083-014D-3037-AC54-1395F3796734> /usr/lib/liblzma.5.dylib
0x7fff2aeb0000 - 0x7fff2aeb1fff libcoretls_cfhelpers.dylib (169.100.1) <6760D250-2628-3DA2-A8A4-6F438E09527A> /usr/lib/libcoretls_cfhelpers.dylib
0x7fff2aeb2000 - 0x7fff2afadfff com.apple.APFS (1677.141.3 - 1677.141.3) <E4B0DF0F-E1A5-3FEF-A2A6-8105AD54D95A> /System/Library/PrivateFrameworks/APFS.framework/Versions/A/APFS
0x7fff2afae000 - 0x7fff2afbcfff libxar.1.dylib (452.140.1) <9E460111-1BBC-31FE-8CAF-FA8AEC22C1E9> /usr/lib/libxar.1.dylib
0x7fff2afbd000 - 0x7fff2afc0fff libutil.dylib (58.40.3) <B5961283-0856-3D78-AE9C-EAFB6A903569> /usr/lib/libutil.dylib
0x7fff2afc1000 - 0x7fff2afe9fff libxslt.1.dylib (17.10) <52B300FD-B3F6-3689-9554-98B543A298C7> /usr/lib/libxslt.1.dylib
0x7fff2afea000 - 0x7fff2aff4fff libChineseTokenizer.dylib (37.1) <62BC78D3-1400-3366-A04E-C8BEE6AC00B5> /usr/lib/libChineseTokenizer.dylib
0x7fff2aff5000 - 0x7fff2b0b2fff libvMisc.dylib (760.100.3) <560739C2-D16B-36CA-89F4-BD4DD2192333> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib
0x7fff2b0b3000 - 0x7fff2b14afff libate.dylib (3.0.6) <CCD85C79-E892-339A-B0AB-E385D4F635F5> /usr/lib/libate.dylib
0x7fff2b14b000 - 0x7fff2b152fff libIOReport.dylib (64.100.1) <0997845A-1FF3-35B3-A5ED-2FB16D07F624> /usr/lib/libIOReport.dylib
0x7fff2b2e8000 - 0x7fff2b33bfff com.apple.AppleVAFramework (6.1.3 - 6.1.3) <1D2A99E8-473C-3C56-BD50-98549FDC8932> /System/Library/PrivateFrameworks/AppleVA.framework/Versions/A/AppleVA
0x7fff2b33c000 - 0x7fff2b355fff libexpat.1.dylib (26.141.1) <0416D6BA-2AEB-3BBA-8584-FA28E62E8007> /usr/lib/libexpat.1.dylib
0x7fff2b481000 - 0x7fff2b4a9fff com.apple.persistentconnection (1.0 - 1.0) <CF93C3EC-D1D1-3BAE-92B1-5D85A969F748> /System/Library/PrivateFrameworks/PersistentConnection.framework/Versions/A/PersistentConnection
0x7fff2b4aa000 - 0x7fff2b4b8fff com.apple.GraphVisualizer (1.0 - 100.1) <928039C0-8548-38A0-BBE9-6AA807CCE7B7> /System/Library/PrivateFrameworks/GraphVisualizer.framework/Versions/A/GraphVisualizer
0x7fff2b4b9000 - 0x7fff2b8d4fff com.apple.vision.FaceCore (4.3.2 - 4.3.2) <13FFCD22-55DB-301B-9C6F-03C94266591B> /System/Library/PrivateFrameworks/FaceCore.framework/Versions/A/FaceCore
0x7fff2b8d5000 - 0x7fff2b91cfff com.apple.OTSVG (1.0 - 677.6.0.4) <F89F3381-E8B0-3964-995E-6213AD6955B2> /System/Library/PrivateFrameworks/OTSVG.framework/Versions/A/OTSVG
0x7fff2b91d000 - 0x7fff2b923fff com.apple.xpc.AppServerSupport (1.0 - 2038.120.1.701.2) <C50E01B1-3B25-3237-A424-0E1FEBC3A890> /System/Library/PrivateFrameworks/AppServerSupport.framework/Versions/A/AppServerSupport
0x7fff2b924000 - 0x7fff2b936fff libhvf.dylib (1.0 - $[CURRENT_PROJECT_VERSION]) <24525668-9221-3A7B-8C64-CDAE418049DA> /System/Library/PrivateFrameworks/FontServices.framework/libhvf.dylib
0x7fff2b937000 - 0x7fff2b939fff libspindump.dylib (295.3) <E090FDF1-6C64-39B9-A750-63EC6AB7E0C1> /usr/lib/libspindump.dylib
0x7fff2bb9a000 - 0x7fff2bbd6fff com.apple.bom (14.0 - 235.3) <B849E4CE-F977-3F27-857C-8000AD4FE041> /System/Library/PrivateFrameworks/Bom.framework/Versions/A/Bom
0x7fff2bbd7000 - 0x7fff2bc20fff com.apple.AppleJPEG (1.0 - 1) <2A8FF19B-B937-3F0B-A59B-A9A8B24F53E6> /System/Library/PrivateFrameworks/AppleJPEG.framework/Versions/A/AppleJPEG
0x7fff2bc21000 - 0x7fff2bd00fff libJP2.dylib (2130.16.2) <967ABC46-39E1-38CC-8BE5-FDE37AFB77BD> /System/Library/Frameworks/ImageIO.framework/Versions/A/Resources/libJP2.dylib
0x7fff2bd01000 - 0x7fff2bd04fff com.apple.WatchdogClient.framework (1.0 - 98.120.2) <2907CD2F-A115-34CF-BB9D-DCFEBAEE78F3> /System/Library/PrivateFrameworks/WatchdogClient.framework/Versions/A/WatchdogClient
0x7fff2bd05000 - 0x7fff2bd3bfff com.apple.MultitouchSupport.framework (4440.3.1 - 4440.3.1) <9F19F332-ADC4-3D35-A9ED-C62AC334A704> /System/Library/PrivateFrameworks/MultitouchSupport.framework/Versions/A/MultitouchSupport
0x7fff2bd3c000 - 0x7fff2be9afff com.apple.VideoToolbox (1.0 - 2780.10.4.1.1) <774F6519-3AF4-3B1C-BCA8-72B287B8CF05> /System/Library/Frameworks/VideoToolbox.framework/Versions/A/VideoToolbox
0x7fff2be9b000 - 0x7fff2becefff libAudioToolboxUtility.dylib (1181.72.5) <F1E3759E-BE9B-30CE-97F0-669ECD0F012F> /usr/lib/libAudioToolboxUtility.dylib
0x7fff2becf000 - 0x7fff2beeffff libPng.dylib (2130.16.2) <BE17D73D-2A4F-3A46-85DE-A523D58A82F5> /System/Library/Frameworks/ImageIO.framework/Versions/A/Resources/libPng.dylib
0x7fff2bef0000 - 0x7fff2bf51fff libTIFF.dylib (2130.16.2) <48B3B58D-BC6F-3D7B-988C-729475BFACCA> /System/Library/Frameworks/ImageIO.framework/Versions/A/Resources/libTIFF.dylib
0x7fff2bf52000 - 0x7fff2bf6efff com.apple.IOPresentment (58 - 37) <BA4B88DD-52FF-3F07-82E2-533E7D6548C0> /System/Library/PrivateFrameworks/IOPresentment.framework/Versions/A/IOPresentment
0x7fff2bf6f000 - 0x7fff2bf76fff com.apple.GPUWrangler (6.3.6 - 6.3.6) <D41B27C4-9FC6-3F2F-8C65-06F9ED6EE43D> /System/Library/PrivateFrameworks/GPUWrangler.framework/Versions/A/GPUWrangler
0x7fff2bf77000 - 0x7fff2bf7afff libRadiance.dylib (2130.16.2) <5B7B2CA5-D6ED-3A8E-97BC-A3D99EE023F9> /System/Library/Frameworks/ImageIO.framework/Versions/A/Resources/libRadiance.dylib
0x7fff2bf7b000 - 0x7fff2bf80fff com.apple.DSExternalDisplay (3.1 - 380) <85C7B7A3-8E61-3773-A7A1-0E594C390144> /System/Library/PrivateFrameworks/DSExternalDisplay.framework/Versions/A/DSExternalDisplay
0x7fff2bf81000 - 0x7fff2bfa5fff libJPEG.dylib (2130.16.2) <164FF5FC-2979-394D-8474-D35A20394794> /System/Library/Frameworks/ImageIO.framework/Versions/A/Resources/libJPEG.dylib
0x7fff2bfd6000 - 0x7fff2bfdafff libGIF.dylib (2130.16.2) <3EDEAB7F-7268-3DA5-898D-1C21579DC859> /System/Library/Frameworks/ImageIO.framework/Versions/A/Resources/libGIF.dylib
0x7fff2bfdb000 - 0x7fff2bfe4fff com.apple.CMCaptureCore (1.0 - 82.6) <E96DDCE5-8BCF-392D-84C6-87D76BC9DF9A> /System/Library/PrivateFrameworks/CMCaptureCore.framework/Versions/A/CMCaptureCore
0x7fff2d0a7000 - 0x7fff2d0bffff libresolv.9.dylib (68.2) <78461752-674E-35D5-94D6-17E627802577> /usr/lib/libresolv.9.dylib
0x7fff2d1f2000 - 0x7fff2d2d1fff libSMC.dylib (20) <038349BC-2D55-3E20-B3C3-BFEEA36E730A> /usr/lib/libSMC.dylib
0x7fff2d9b6000 - 0x7fff2d9c1fff com.apple.perfdata (1.0 - 67.40.1) <4A3F192A-2416-3D16-B318-A160264359DE> /System/Library/PrivateFrameworks/perfdata.framework/Versions/A/perfdata
0x7fff2dc13000 - 0x7fff2dc13fff liblaunch.dylib (2038.120.1.701.2) <B79B00B1-954F-3EC4-9E22-A24E25CAE88D> /usr/lib/system/liblaunch.dylib
0x7fff300af000 - 0x7fff300affff libsystem_product_info_filter.dylib (8.40.1) <BB06C92C-6BD7-310C-A176-DC0DCE8D9F2B> /usr/lib/system/libsystem_product_info_filter.dylib
0x7fff30187000 - 0x7fff30187fff com.apple.Accelerate.vecLib (3.11 - vecLib 3.11) <F46E0ACF-7524-3CA3-A64A-5DDF6081EB67> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/vecLib
0x7fff301ad000 - 0x7fff301adfff com.apple.CoreServices (1122.45 - 1122.45) <05DA2462-9BFC-38D9-820A-8842710471D6> /System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices
0x7fff30369000 - 0x7fff30369fff com.apple.Accelerate (1.11 - Accelerate 1.11) <3D8DECC6-19B3-3A32-98CF-EB07536D1635> /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate
0x7fff3ce7b000 - 0x7fff3ce9afff com.apple.private.SystemPolicy (1.0 - 1) <89781097-BA30-3186-B368-3830D1FE4FC0> /System/Library/PrivateFrameworks/SystemPolicy.framework/Versions/A/SystemPolicy
0x7fff3d7d8000 - 0x7fff3d7eafff libmis.dylib (274.140.2) <D8990A31-A5AE-3535-BE5E-3EE9A8D0C9F9> /usr/lib/libmis.dylib
0x7fff6bb3a000 - 0x7fff6bb40fff libCoreFSCache.dylib (200.12) <B6360761-3B05-35AE-8E0C-F819414FD093> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCoreFSCache.dylib
0x7fff6bb41000 - 0x7fff6bb45fff libCoreVMClient.dylib (200.12) <4D617E02-85B8-3BC2-82FE-5CEA77809181> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCoreVMClient.dylib
0x7fff6bb46000 - 0x7fff6bb55fff com.apple.opengl (18.5.9 - 18.5.9) <610E765C-8C0D-3422-AD6E-636D3EBD2AD0> /System/Library/Frameworks/OpenGL.framework/Versions/A/OpenGL
0x7fff6bb56000 - 0x7fff6bb58fff libCVMSPluginSupport.dylib (18.5.9) <C7B33518-2685-3985-ABCB-FC3B0105748C> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCVMSPluginSupport.dylib
0x7fff6bb59000 - 0x7fff6bb61fff libGFXShared.dylib (18.5.9) <AD53ED9A-C694-3EF4-863E-898E88F6D84C> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGFXShared.dylib
0x7fff6bb62000 - 0x7fff6bb95fff libGLImage.dylib (18.5.9) <6C478ED9-E513-3E43-B9F1-D15A20A0EE85> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLImage.dylib
0x7fff6bb96000 - 0x7fff6bbd2fff libGLU.dylib (18.5.9) <D5473328-FD13-36A5-9CAC-CF4F59CDAA29> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLU.dylib
0x7fff6bd67000 - 0x7fff6bd71fff libGL.dylib (18.5.9) <08A0476A-D04F-3816-AA4D-11EC2467F748> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGL.dylib
0x7fff6d1b1000 - 0x7fff6d209fff com.apple.opencl (4.6 - 4.6) <8C87D26C-12C6-33E8-AE32-45B699667DB3> /System/Library/Frameworks/OpenCL.framework/Versions/A/OpenCL
```
If the latest version cannot target macOS 11, then a way to prevent the application from automatically updating past the point of compatibility and breaking itself would be desirable.
### OS
macOS
### GPU
Intel
### CPU
Intel
### Ollama version
0.3.13
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7224/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7224/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2971
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2971/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2971/comments
|
https://api.github.com/repos/ollama/ollama/issues/2971/events
|
https://github.com/ollama/ollama/issues/2971
| 2,173,005,613
|
I_kwDOJ0Z1Ps6BhW8t
| 2,971
|
Error: Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:49192->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.
|
{
"login": "yxl23",
"id": 115678682,
"node_id": "U_kgDOBuUd2g",
"avatar_url": "https://avatars.githubusercontent.com/u/115678682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yxl23",
"html_url": "https://github.com/yxl23",
"followers_url": "https://api.github.com/users/yxl23/followers",
"following_url": "https://api.github.com/users/yxl23/following{/other_user}",
"gists_url": "https://api.github.com/users/yxl23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yxl23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yxl23/subscriptions",
"organizations_url": "https://api.github.com/users/yxl23/orgs",
"repos_url": "https://api.github.com/users/yxl23/repos",
"events_url": "https://api.github.com/users/yxl23/events{/privacy}",
"received_events_url": "https://api.github.com/users/yxl23/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-03-07T05:44:10
| 2024-09-02T07:59:30
| 2024-03-12T02:12:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Error: Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:49192->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2971/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2764
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2764/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2764/comments
|
https://api.github.com/repos/ollama/ollama/issues/2764/events
|
https://github.com/ollama/ollama/issues/2764
| 2,153,989,598
|
I_kwDOJ0Z1Ps6AY0Xe
| 2,764
|
Suggestion: Add a timeout parameter to Chat and Generation calls.
|
{
"login": "dezoito",
"id": 6494010,
"node_id": "MDQ6VXNlcjY0OTQwMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6494010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dezoito",
"html_url": "https://github.com/dezoito",
"followers_url": "https://api.github.com/users/dezoito/followers",
"following_url": "https://api.github.com/users/dezoito/following{/other_user}",
"gists_url": "https://api.github.com/users/dezoito/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dezoito/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dezoito/subscriptions",
"organizations_url": "https://api.github.com/users/dezoito/orgs",
"repos_url": "https://api.github.com/users/dezoito/repos",
"events_url": "https://api.github.com/users/dezoito/events{/privacy}",
"received_events_url": "https://api.github.com/users/dezoito/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-02-26T11:51:29
| 2024-02-26T11:52:22
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
First of all thanks for the hard work you guys are putting into this!
I don't think there's an easy way to do this directly... please correct me if I'm wrong.
(Looks like Ollama-py implements this for sync calls, but it is passed to the `httpx` client, and not to the Ollama host.)
The motivation is to allow production apps to programmatically drop requests that are taking too long, freeing up resources, and to allow client libs (like Ollama-rs and Ollama-js) to "pass through" this parameter to the Ollama host, simplifying implementation.
Thoughts?
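Until the host supports such a parameter, one client-side stopgap is to enforce the timeout around the blocking call yourself. This is a minimal Python sketch, not part of any Ollama client library; `call_with_timeout` and `slow_generate` are hypothetical names standing in for a real generation call:

```python
import concurrent.futures
import time

def call_with_timeout(fn, timeout_s, *args, **kwargs):
    """Run fn in a worker thread and give up if it exceeds timeout_s."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn, *args, **kwargs)
        try:
            return future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            # Best effort: the worker thread may still run to completion,
            # so this frees the caller but not necessarily the server.
            future.cancel()
            raise

def slow_generate(prompt):
    """Stand-in for a slow chat/generation request."""
    time.sleep(0.2)
    return f"response to {prompt!r}"

print(call_with_timeout(slow_generate, 1.0, "hi"))
```

Note this only unblocks the caller; without a server-side timeout the host keeps working on the request, which is exactly why a pass-through parameter would be nicer.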
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2764/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2764/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/3747
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3747/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3747/comments
|
https://api.github.com/repos/ollama/ollama/issues/3747/events
|
https://github.com/ollama/ollama/issues/3747
| 2,252,117,314
|
I_kwDOJ0Z1Ps6GPJVC
| 3,747
|
Support `XLMRobertaModel` architecture
|
{
"login": "wouterverduin",
"id": 10314050,
"node_id": "MDQ6VXNlcjEwMzE0MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/10314050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wouterverduin",
"html_url": "https://github.com/wouterverduin",
"followers_url": "https://api.github.com/users/wouterverduin/followers",
"following_url": "https://api.github.com/users/wouterverduin/following{/other_user}",
"gists_url": "https://api.github.com/users/wouterverduin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wouterverduin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wouterverduin/subscriptions",
"organizations_url": "https://api.github.com/users/wouterverduin/orgs",
"repos_url": "https://api.github.com/users/wouterverduin/repos",
"events_url": "https://api.github.com/users/wouterverduin/events{/privacy}",
"received_events_url": "https://api.github.com/users/wouterverduin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 20
| 2024-04-19T05:28:04
| 2024-10-17T09:21:03
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi all from Ollama!
First off: Great work with Ollama, keep up the good work!
What I am missing, though, are models in different languages (Dutch, in my case). Is it possible to add multilingual embeddings like "intfloat/multilingual-e5-large-instruct"?
If there is a way to do this myself, I would love directions!
Thanks in advance!
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3747/reactions",
"total_count": 20,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 9
}
|
https://api.github.com/repos/ollama/ollama/issues/3747/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/562
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/562/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/562/comments
|
https://api.github.com/repos/ollama/ollama/issues/562/events
|
https://github.com/ollama/ollama/pull/562
| 1,905,906,733
|
PR_kwDOJ0Z1Ps5a1Ssn
| 562
|
fix OLLAMA_HOST parsing for ip6
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-09-21T00:59:40
| 2023-09-21T02:54:48
| 2023-09-21T02:54:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/562",
"html_url": "https://github.com/ollama/ollama/pull/562",
"diff_url": "https://github.com/ollama/ollama/pull/562.diff",
"patch_url": "https://github.com/ollama/ollama/pull/562.patch",
"merged_at": "2023-09-21T02:54:47"
}
|
Fix the environment parsing for `OLLAMA_HOST` so it recognizes IPv6 addresses, e.g. the IPv6 loopback `[::1]:11434`
Some examples:
Default
```
$ OLLAMA_HOST='' ollama serve
2023/09/20 17:55:23 routes.go:540: Listening on 127.0.0.1:11434
```
IPv6 loopback
```
$ OLLAMA_HOST='[::1]:11434' ollama serve
2023/09/20 17:58:08 routes.go:540: Listening on [::1]:11434
```
Random port (any IPv4 & IPv6 address)
```
$ OLLAMA_HOST=':0' ollama serve
2023/09/20 17:58:26 routes.go:540: Listening on [::]:63574
```
Only IPv4
```
$ OLLAMA_HOST='127.0.0.1:12345' ollama serve
2023/09/20 17:58:37 routes.go:540: Listening on 127.0.0.1:12345
```
Only IPv6
```
$ OLLAMA_HOST='[::1]:12345' ollama serve
2023/09/20 17:59:23 routes.go:540: Listening on [::1]:12345
```
Only setting the address
```
$ OLLAMA_HOST='0.0.0.0' ollama serve
2023/09/20 18:54:18 routes.go:540: Listening on [::]:11434
```
It also removes `OLLAMA_PORT`
Resolves #560
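For illustration, the behavior in the examples above could be sketched as follows. This is a hypothetical Python helper (`parse_ollama_host`), not the actual Go implementation; an empty host stands for "all interfaces", matching the `:0` example:

```python
def parse_ollama_host(value, default_host="127.0.0.1", default_port="11434"):
    """Split an OLLAMA_HOST-style value into (host, port).

    Mirrors the examples above: empty input falls back to the defaults,
    bracketed IPv6 is unwrapped, a bare IPv6 address gets the default port,
    and an empty host (as in ':0') means listen on all interfaces.
    """
    value = value.strip()
    if not value:
        return default_host, default_port
    if value.startswith("["):              # bracketed IPv6, e.g. [::1]:11434
        host, _, rest = value[1:].partition("]")
        return host, rest.lstrip(":") or default_port
    if value.count(":") > 1:               # bare IPv6 address, no port
        return value, default_port
    host, _, port = value.partition(":")   # IPv4 or hostname, optional port
    return host, port or default_port

assert parse_ollama_host("") == ("127.0.0.1", "11434")
assert parse_ollama_host("[::1]:11434") == ("::1", "11434")
assert parse_ollama_host(":0") == ("", "0")          # "" = all interfaces
assert parse_ollama_host("0.0.0.0") == ("0.0.0.0", "11434")
```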
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/562/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/426
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/426/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/426/comments
|
https://api.github.com/repos/ollama/ollama/issues/426/events
|
https://github.com/ollama/ollama/pull/426
| 1,868,245,067
|
PR_kwDOJ0Z1Ps5Y21z6
| 426
|
set default template
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-26T19:21:52
| 2023-08-26T21:15:39
| 2023-08-26T21:15:38
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/426",
"html_url": "https://github.com/ollama/ollama/pull/426",
"diff_url": "https://github.com/ollama/ollama/pull/426.diff",
"patch_url": "https://github.com/ollama/ollama/pull/426.patch",
"merged_at": "2023-08-26T21:15:38"
}
|
fixes #413
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/426/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3689
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3689/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3689/comments
|
https://api.github.com/repos/ollama/ollama/issues/3689/events
|
https://github.com/ollama/ollama/issues/3689
| 2,247,272,662
|
I_kwDOJ0Z1Ps6F8qjW
| 3,689
|
Aya by Cohere - mt5-xxl arch
|
{
"login": "oliviermills",
"id": 6075303,
"node_id": "MDQ6VXNlcjYwNzUzMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6075303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliviermills",
"html_url": "https://github.com/oliviermills",
"followers_url": "https://api.github.com/users/oliviermills/followers",
"following_url": "https://api.github.com/users/oliviermills/following{/other_user}",
"gists_url": "https://api.github.com/users/oliviermills/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliviermills/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliviermills/subscriptions",
"organizations_url": "https://api.github.com/users/oliviermills/orgs",
"repos_url": "https://api.github.com/users/oliviermills/repos",
"events_url": "https://api.github.com/users/oliviermills/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliviermills/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
open
| false
| null |
[] | null | 3
| 2024-04-17T03:17:43
| 2024-08-31T08:37:26
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?
https://huggingface.co/CohereForAI/aya-101
See the discussion regarding T5 and GGUF attempts here: https://huggingface.co/CohereForAI/aya-101/discussions/12
trial: https://huggingface.co/kcoopermiller/aya-101-GGUF (using candle)
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3689/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1276
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1276/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1276/comments
|
https://api.github.com/repos/ollama/ollama/issues/1276/events
|
https://github.com/ollama/ollama/issues/1276
| 2,010,952,287
|
I_kwDOJ0Z1Ps533LJf
| 1,276
|
A way to reuse ollama API with RAG app.
|
{
"login": "sandangel",
"id": 22189661,
"node_id": "MDQ6VXNlcjIyMTg5NjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/22189661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sandangel",
"html_url": "https://github.com/sandangel",
"followers_url": "https://api.github.com/users/sandangel/followers",
"following_url": "https://api.github.com/users/sandangel/following{/other_user}",
"gists_url": "https://api.github.com/users/sandangel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sandangel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sandangel/subscriptions",
"organizations_url": "https://api.github.com/users/sandangel/orgs",
"repos_url": "https://api.github.com/users/sandangel/repos",
"events_url": "https://api.github.com/users/sandangel/events{/privacy}",
"received_events_url": "https://api.github.com/users/sandangel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-11-26T08:33:24
| 2024-09-04T03:27:28
| 2024-09-04T03:27:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, I would like to build a RAG app, but instead of giving it its own API, I would like to reuse the existing Ollama API so that it works with many existing clients. Is there a way to do that?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1276/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1742
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1742/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1742/comments
|
https://api.github.com/repos/ollama/ollama/issues/1742/events
|
https://github.com/ollama/ollama/issues/1742
| 2,060,697,690
|
I_kwDOJ0Z1Ps5608Ba
| 1,742
|
Low VRAM mode?
|
{
"login": "JumboTortoise",
"id": 117757133,
"node_id": "U_kgDOBwTUzQ",
"avatar_url": "https://avatars.githubusercontent.com/u/117757133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JumboTortoise",
"html_url": "https://github.com/JumboTortoise",
"followers_url": "https://api.github.com/users/JumboTortoise/followers",
"following_url": "https://api.github.com/users/JumboTortoise/following{/other_user}",
"gists_url": "https://api.github.com/users/JumboTortoise/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JumboTortoise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JumboTortoise/subscriptions",
"organizations_url": "https://api.github.com/users/JumboTortoise/orgs",
"repos_url": "https://api.github.com/users/JumboTortoise/repos",
"events_url": "https://api.github.com/users/JumboTortoise/events{/privacy}",
"received_events_url": "https://api.github.com/users/JumboTortoise/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-12-30T09:51:32
| 2024-07-03T18:12:05
| 2023-12-30T11:15:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have a 12GB RTX 3060 that can easily run 7B models but fails on the larger ones. Does ollama have a low-VRAM mode? Is there any way to move model layers from VRAM to system RAM? I would really like to try out larger LLMs without having to rent a cloud compute server or buy a new GPU, even if inference is much slower.
I am not very knowledgeable on the subject, but maybe using DeepSpeed for boosting inference performance is a possibility?
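As a back-of-the-envelope illustration of partial offload (keeping some layers on the GPU and the rest in system RAM), one could estimate how many layers fit in a VRAM budget. This is purely hypothetical arithmetic, not how ollama actually schedules layers:

```python
def layers_that_fit(vram_bytes, model_bytes, n_layers, overhead_bytes=1 << 30):
    """Rough estimate of how many transformer layers fit in VRAM.

    Assumes layers are roughly equal in size and reserves some overhead
    for the KV cache and scratch buffers. Illustrative only.
    """
    per_layer = model_bytes / n_layers
    budget = vram_bytes - overhead_bytes
    if budget <= 0:
        return 0
    return min(n_layers, int(budget // per_layer))

# 12 GB card, a ~26 GB model with 40 layers:
print(layers_that_fit(12 << 30, 26 << 30, 40))
```

The remaining layers would then run on the CPU from system RAM, which is where the slowdown comes from.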
|
{
"login": "JumboTortoise",
"id": 117757133,
"node_id": "U_kgDOBwTUzQ",
"avatar_url": "https://avatars.githubusercontent.com/u/117757133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JumboTortoise",
"html_url": "https://github.com/JumboTortoise",
"followers_url": "https://api.github.com/users/JumboTortoise/followers",
"following_url": "https://api.github.com/users/JumboTortoise/following{/other_user}",
"gists_url": "https://api.github.com/users/JumboTortoise/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JumboTortoise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JumboTortoise/subscriptions",
"organizations_url": "https://api.github.com/users/JumboTortoise/orgs",
"repos_url": "https://api.github.com/users/JumboTortoise/repos",
"events_url": "https://api.github.com/users/JumboTortoise/events{/privacy}",
"received_events_url": "https://api.github.com/users/JumboTortoise/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1742/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2157
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2157/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2157/comments
|
https://api.github.com/repos/ollama/ollama/issues/2157/events
|
https://github.com/ollama/ollama/issues/2157
| 2,096,205,436
|
I_kwDOJ0Z1Ps588Y58
| 2,157
|
Incoherent latency on ARM machine
|
{
"login": "racso-dev",
"id": 51890236,
"node_id": "MDQ6VXNlcjUxODkwMjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/51890236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/racso-dev",
"html_url": "https://github.com/racso-dev",
"followers_url": "https://api.github.com/users/racso-dev/followers",
"following_url": "https://api.github.com/users/racso-dev/following{/other_user}",
"gists_url": "https://api.github.com/users/racso-dev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/racso-dev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/racso-dev/subscriptions",
"organizations_url": "https://api.github.com/users/racso-dev/orgs",
"repos_url": "https://api.github.com/users/racso-dev/repos",
"events_url": "https://api.github.com/users/racso-dev/events{/privacy}",
"received_events_url": "https://api.github.com/users/racso-dev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2024-01-23T14:24:42
| 2024-01-25T10:13:06
| 2024-01-25T10:13:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I deployed mistral:7b on an ARM instance on Scaleway with 32 vCPUs and 128GB of memory. I can't figure out why inference times are on the order of several minutes, and was wondering if you had any idea of the cause and a potential solution.
For the record, I installed ollama via `curl https://ollama.ai/install.sh | sh`
And if you need more details about the machine I used: it's the biggest ARM instance available on Scaleway, the COPARM1-32-128G instance. You can find more information [here](https://www.scaleway.com/en/cost-optimized-instances-based-on-arm/).
I also tried bigger models, and one thing I noticed was that while my inference was running, memory usage was surprisingly low, around 2GB out of the 128GB available, and that only about half of the 32 cores were used.
Would be wonderful if anyone had an idea on how to solve this!
|
{
"login": "racso-dev",
"id": 51890236,
"node_id": "MDQ6VXNlcjUxODkwMjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/51890236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/racso-dev",
"html_url": "https://github.com/racso-dev",
"followers_url": "https://api.github.com/users/racso-dev/followers",
"following_url": "https://api.github.com/users/racso-dev/following{/other_user}",
"gists_url": "https://api.github.com/users/racso-dev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/racso-dev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/racso-dev/subscriptions",
"organizations_url": "https://api.github.com/users/racso-dev/orgs",
"repos_url": "https://api.github.com/users/racso-dev/repos",
"events_url": "https://api.github.com/users/racso-dev/events{/privacy}",
"received_events_url": "https://api.github.com/users/racso-dev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2157/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8049
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8049/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8049/comments
|
https://api.github.com/repos/ollama/ollama/issues/8049/events
|
https://github.com/ollama/ollama/pull/8049
| 2,733,305,728
|
PR_kwDOJ0Z1Ps6E4KV2
| 8,049
|
Implement OLLAMA_MAX_KEEP_ALIVE environment variable
|
{
"login": "jhbruhn",
"id": 1036566,
"node_id": "MDQ6VXNlcjEwMzY1NjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1036566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jhbruhn",
"html_url": "https://github.com/jhbruhn",
"followers_url": "https://api.github.com/users/jhbruhn/followers",
"following_url": "https://api.github.com/users/jhbruhn/following{/other_user}",
"gists_url": "https://api.github.com/users/jhbruhn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jhbruhn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jhbruhn/subscriptions",
"organizations_url": "https://api.github.com/users/jhbruhn/orgs",
"repos_url": "https://api.github.com/users/jhbruhn/repos",
"events_url": "https://api.github.com/users/jhbruhn/events{/privacy}",
"received_events_url": "https://api.github.com/users/jhbruhn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 4
| 2024-12-11T15:34:35
| 2024-12-13T23:53:34
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/8049",
"html_url": "https://github.com/ollama/ollama/pull/8049",
"diff_url": "https://github.com/ollama/ollama/pull/8049.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8049.patch",
"merged_at": null
}
|
In cooperative server environments, it is desirable to cap the keep_alive value of models to a maximum duration. This enables server administrators to guarantee that models are unloaded at _some_ point if they are not needed anymore.
For this, I propose to introduce the `OLLAMA_MAX_KEEP_ALIVE` environment variable, which allows the administrator to specify a maximum keep-alive value at which all other requests will be capped. This additionally disallows indefinite loading of models.
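To illustrate the intended semantics, here is a minimal sketch of the capping logic (names and units are illustrative — the real server works with Go duration strings, not raw seconds):

```python
import os

def cap_keep_alive(requested_s: float, env=os.environ) -> float:
    """Cap a requested keep_alive (seconds; negative means 'keep loaded
    forever') at OLLAMA_MAX_KEEP_ALIVE. Illustrative sketch only."""
    raw = env.get("OLLAMA_MAX_KEEP_ALIVE")
    if raw is None:
        return requested_s  # no cap configured
    max_s = float(raw)
    # A negative request would normally keep the model loaded forever,
    # so it must be capped as well.
    if requested_s < 0 or requested_s > max_s:
        return max_s
    return requested_s
```

With a cap of 600 seconds, both a 30-minute request and an "indefinite" (-1) request come back as 600, while shorter requests pass through unchanged.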
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8049/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3817
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3817/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3817/comments
|
https://api.github.com/repos/ollama/ollama/issues/3817/events
|
https://github.com/ollama/ollama/pull/3817
| 2,255,964,277
|
PR_kwDOJ0Z1Ps5tUfnb
| 3,817
|
docs: Update README for Lobe-chat integration.
|
{
"login": "MapleEve",
"id": 1845053,
"node_id": "MDQ6VXNlcjE4NDUwNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1845053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MapleEve",
"html_url": "https://github.com/MapleEve",
"followers_url": "https://api.github.com/users/MapleEve/followers",
"following_url": "https://api.github.com/users/MapleEve/following{/other_user}",
"gists_url": "https://api.github.com/users/MapleEve/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MapleEve/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MapleEve/subscriptions",
"organizations_url": "https://api.github.com/users/MapleEve/orgs",
"repos_url": "https://api.github.com/users/MapleEve/repos",
"events_url": "https://api.github.com/users/MapleEve/events{/privacy}",
"received_events_url": "https://api.github.com/users/MapleEve/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-04-22T08:49:42
| 2024-04-23T03:19:02
| 2024-04-23T00:18:16
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3817",
"html_url": "https://github.com/ollama/ollama/pull/3817",
"diff_url": "https://github.com/ollama/ollama/pull/3817.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3817.patch",
"merged_at": "2024-04-23T00:18:16"
}
|
Added Lobe Chat for Ollama
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3817/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3622
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3622/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3622/comments
|
https://api.github.com/repos/ollama/ollama/issues/3622/events
|
https://github.com/ollama/ollama/issues/3622
| 2,241,315,880
|
I_kwDOJ0Z1Ps6Fl8Qo
| 3,622
|
Ollama fails to create models when using IQ quantized GGUFs - Error: invalid file magic
|
{
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/followers",
"following_url": "https://api.github.com/users/sammcj/following{/other_user}",
"gists_url": "https://api.github.com/users/sammcj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sammcj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sammcj/subscriptions",
"organizations_url": "https://api.github.com/users/sammcj/orgs",
"repos_url": "https://api.github.com/users/sammcj/repos",
"events_url": "https://api.github.com/users/sammcj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sammcj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 29
| 2024-04-13T05:29:51
| 2024-06-01T05:03:19
| 2024-05-23T20:21:50
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Creating an Ollama model from a standard IQ-quantized GGUF fails with "Error: invalid file magic"
```
ollama create sammcj/zephyr-orpo-141b-A35b-v0.1:IQ3_XS -f Modelfile-IQ3_XS
transferring model data
creating model layer
Error: invalid file magic
```
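For context, valid GGUF files begin with the 4-byte magic `GGUF` followed by a little-endian `u32` version number. A quick sanity check on the merged file (a sketch, not Ollama's actual parser) can rule out a corrupt merge before blaming the importer:

```python
import struct

GGUF_MAGIC = b"GGUF"

def gguf_version(path: str) -> int:
    """Return the GGUF version of a file, raising ValueError if the
    4-byte magic at offset 0 is not b'GGUF'."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != GGUF_MAGIC:
            raise ValueError(f"invalid file magic: {magic!r}")
        (version,) = struct.unpack("<I", f.read(4))
    return version
```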
I've tried with pre-built Ollama packages and compiling Ollama from source.
With the output here I am using the latest Ollama built from main.
#### llama.cpp and lm-studio
- Running the same GGUF directly with llama.cpp works without issue:
```
main -m zephyr-orpo-141b-A35b-v0.1.IQ3_XS.gguf -ngl 99 -p 'tell me a joke'
....(truncated)
Why did the tomato turn red? Because it saw the salad dressing!
```
- And it works in LM Studio 0.2.19 without issue.
#### Model
Seems to happen with all IQ3 based models I've found.
For example, here I've tried with zephyr-orpo-141b-A35b-v0.1 at IQ3_XS
- https://huggingface.co/MaziyarPanahi/zephyr-orpo-141b-A35b-v0.1-GGUF/tree/main
#### Modelfile
```
# IQ3_X_S
FROM ./zephyr-orpo-141b-A35b-v0.1.IQ3_XS.gguf
TEMPLATE """
<|system|>
{{ .System }}<|endoftext|>
<|user|>
{{ .Prompt }}<|endoftext|>
<|assistant|>
"""
PARAMETER stop "<|system|>"
PARAMETER stop "<|user|>"
PARAMETER stop "<|assistant|>"
PARAMETER stop "</s>"
```
### What did you expect to see?
The model to be successfully imported the same as any non-IQ quant GGUF.
### Steps to reproduce
As per above
1. Download zephyr-orpo-141b-A35b-v0.1.IQ3_XS
2. You have to join the GGUFs using `gguf-split --merge <first gguf file> <output file>` as it seems Ollama doesn't support multi-file models (see log below)
3. Create a basic Modelfile
4. Run Ollama create with GGUF and Modelfile.
### Are there any recent changes that introduced the issue?
I think it's always been a problem, at least whenever I've tried it
### OS
macOS
### Architecture
arm64
### Platform
_No response_
### Ollama version
main, v0.1.31
### GPU
Apple
### GPU info
96GB M2 Max
### CPU
Apple
### Other software
#### Merge multi-part GGUF using gguf-split
```
samm-mbp ~/.cache/lm-studio/models/MaziyarPanahi/zephyr-orpo-141b-A35b-v0.1-GGUF [1] $ gguf-split --merge zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00001-of-00005.gguf zephyr-orpo-141b-A35b-v0.1.IQ3_XS.gguf
gguf_merge: zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00001-of-00005.gguf -> zephyr-orpo-141b-A35b-v0.1.IQ3_XS.gguf
gguf_merge: reading metadata zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00001-of-00005.gguf ...ggml_opencl: selecting platform: 'Apple'
ggml_opencl: selecting device: 'Apple M2 Max'
done
gguf_merge: reading metadata zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00002-of-00005.gguf done
gguf_merge: reading metadata zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00003-of-00005.gguf done
gguf_merge: reading metadata zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00004-of-00005.gguf done
gguf_merge: reading metadata zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00005-of-00005.gguf done
gguf_merge: writing tensors zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00001-of-00005.gguf done
gguf_merge: writing tensors zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00002-of-00005.gguf done
gguf_merge: writing tensors zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00003-of-00005.gguf done
gguf_merge: writing tensors zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00004-of-00005.gguf done
gguf_merge: writing tensors zephyr-orpo-141b-A35b-v0.1.IQ3_XS-00005-of-00005.gguf done
```
#### llama.cpp load logs (without Ollama)
```
main -m zephyr-orpo-141b-A35b-v0.1.IQ3_XS.gguf -ngl 99 -p 'tell me a joke'
Log start
main: build = 1266 (ab9a3240)
main: built with Apple clang version 15.0.0 (clang-1500.3.9.4) for arm64-apple-darwin23.4.0
main: seed = 1712985554
ggml_opencl: selecting platform: 'Apple'
ggml_opencl: selecting device: 'Apple M2 Max'
llama_model_loader: loaded meta data with 30 key-value pairs and 563 tensors from zephyr-orpo-141b-A35b-v0.1.IQ3_XS.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = models--HuggingFaceH4--zephyr-orpo-14...
llama_model_loader: - kv 2: llama.block_count u32 = 56
llama_model_loader: - kv 3: llama.context_length u32 = 65536
llama_model_loader: - kv 4: llama.embedding_length u32 = 6144
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 16384
llama_model_loader: - kv 6: llama.attention.head_count u32 = 48
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.expert_count u32 = 8
llama_model_loader: - kv 11: llama.expert_used_count u32 = 2
llama_model_loader: - kv 12: general.file_type u32 = 22
llama_model_loader: - kv 13: llama.vocab_size u32 = 32000
llama_model_loader: - kv 14: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 15: tokenizer.ggml.model str = llama
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 22: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 23: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 24: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 25: tokenizer.chat_template str = {% for message in messages %}\n{% if m...
llama_model_loader: - kv 26: general.quantization_version u32 = 2
llama_model_loader: - kv 27: split.no u16 = 0
llama_model_loader: - kv 28: split.count u16 = 0
llama_model_loader: - kv 29: split.tensors.count i32 = 563
llama_model_loader: - type f32: 113 tensors
llama_model_loader: - type f16: 56 tensors
llama_model_loader: - type q8_0: 112 tensors
llama_model_loader: - type q5_K: 56 tensors
llama_model_loader: - type q6_K: 1 tensors
llama_model_loader: - type iq3_xxs: 140 tensors
llama_model_loader: - type iq3_s: 85 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 65536
llm_load_print_meta: n_embd = 6144
llm_load_print_meta: n_head = 48
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 56
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 6
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 16384
llm_load_print_meta: n_expert = 8
llm_load_print_meta: n_expert_used = 2
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 65536
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8x22B
llm_load_print_meta: model ftype = IQ3_XS - 3.3 bpw
llm_load_print_meta: model params = 140.62 B
llm_load_print_meta: model size = 54.23 GiB (3.31 BPW)
llm_load_print_meta: general.name = models--HuggingFaceH4--zephyr-orpo-141b-A35b-v0.1
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 2 '</s>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.77 MiB
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 55296.00 MiB, offs = 0
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 483.48 MiB, offs = 57636012032, (55779.86 / 73728.00)
llm_load_tensors: offloading 56 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 57/57 layers to GPU
llm_load_tensors: CPU buffer size = 80.57 MiB
llm_load_tensors: Metal buffer size = 55449.46 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Max
ggml_metal_init: picking default device: Apple M2 Max
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name: Apple M2 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 77309.41 MB
llama_kv_cache_init: CPU KV buffer size = 112.00 MiB
llama_new_context_with_model: KV self size = 112.00 MiB, K (f16): 56.00 MiB, V (f16): 56.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.12 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 134.52 MiB, (55915.88 / 73728.00)
llama_new_context_with_model: Metal compute buffer size = 134.50 MiB
llama_new_context_with_model: CPU compute buffer size = 13.01 MiB
llama_new_context_with_model: graph nodes = 2862
llama_new_context_with_model: graph splits = 114
system_info: n_threads = 8 / 12 | AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 |
sampling:
repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 512, n_batch = 2048, n_predict = -1, n_keep = 1
```
#### ollama serve logs
```
ollama serve
time=2024-04-13T15:27:47.492+10:00 level=INFO source=images.go:812 msg="total blobs: 135"
time=2024-04-13T15:27:47.716+10:00 level=INFO source=images.go:819 msg="total unused blobs removed: 2"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.PullModelHandler (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.GenerateHandler (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.ChatHandler (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.EmbeddingsHandler (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.CreateModelHandler (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.PushModelHandler (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.CopyModelHandler (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.DeleteModelHandler (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.ShowModelHandler (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.CreateBlobHandler (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.HeadBlobHandler (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.ChatHandler (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.ListModelsHandler (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.ListModelsHandler (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-04-13T15:27:47.718+10:00 level=INFO source=routes.go:1139 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-04-13T15:27:47.742+10:00 level=INFO source=payload.go:28 msg="extracting embedded files" dir=/var/folders/b2/wnpx7gg566l7dq63x0h27r9r0000gn/T/ollama3558789160/runners
time=2024-04-13T15:27:47.759+10:00 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [metal]"
[GIN] 2024/04/13 - 15:27:50 | 200 | 41.709µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/04/13 - 15:29:01 | 201 | 44.933516875s | 127.0.0.1 | POST "/api/blobs/sha256:6f4db7fb502f25cae8604a40e18a35adcb941ca2c7783c9e7f8a7aca41711fff"
[GIN] 2024/04/13 - 15:29:20 | 200 | 19.285965125s | 127.0.0.1 | POST "/api/create"
```
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3622/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3622/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2428
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2428/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2428/comments
|
https://api.github.com/repos/ollama/ollama/issues/2428/events
|
https://github.com/ollama/ollama/pull/2428
| 2,127,317,616
|
PR_kwDOJ0Z1Ps5mfbAZ
| 2,428
|
Add typechat-cli to Terminal apps
|
{
"login": "anaisbetts",
"id": 1396,
"node_id": "MDQ6VXNlcjEzOTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anaisbetts",
"html_url": "https://github.com/anaisbetts",
"followers_url": "https://api.github.com/users/anaisbetts/followers",
"following_url": "https://api.github.com/users/anaisbetts/following{/other_user}",
"gists_url": "https://api.github.com/users/anaisbetts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anaisbetts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anaisbetts/subscriptions",
"organizations_url": "https://api.github.com/users/anaisbetts/orgs",
"repos_url": "https://api.github.com/users/anaisbetts/repos",
"events_url": "https://api.github.com/users/anaisbetts/events{/privacy}",
"received_events_url": "https://api.github.com/users/anaisbetts/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-09T15:02:09
| 2024-03-25T19:05:04
| 2024-03-25T19:05:04
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2428",
"html_url": "https://github.com/ollama/ollama/pull/2428",
"diff_url": "https://github.com/ollama/ollama/pull/2428.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2428.patch",
"merged_at": "2024-03-25T19:05:04"
}
|
[typechat-cli](https://github.com/anaisbetts/typechat-cli) is a CLI interface for the [TypeChat](https://github.com/Microsoft/TypeChat) library; it's a way to guarantee that the results returned by your Ollama query are consistent JSON in the structure you request.
TypeChat doesn't natively support Ollama, but typechat-cli uses a [fork](https://github.com/anaisbetts/typechat) with Ollama support added.
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2428/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4835
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4835/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4835/comments
|
https://api.github.com/repos/ollama/ollama/issues/4835/events
|
https://github.com/ollama/ollama/issues/4835
| 2,336,049,453
|
I_kwDOJ0Z1Ps6LPUkt
| 4,835
|
Feature request: Model metadata should list capabilities
|
{
"login": "it-s",
"id": 39939,
"node_id": "MDQ6VXNlcjM5OTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/39939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/it-s",
"html_url": "https://github.com/it-s",
"followers_url": "https://api.github.com/users/it-s/followers",
"following_url": "https://api.github.com/users/it-s/following{/other_user}",
"gists_url": "https://api.github.com/users/it-s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/it-s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/it-s/subscriptions",
"organizations_url": "https://api.github.com/users/it-s/orgs",
"repos_url": "https://api.github.com/users/it-s/repos",
"events_url": "https://api.github.com/users/it-s/events{/privacy}",
"received_events_url": "https://api.github.com/users/it-s/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 1
| 2024-06-05T14:15:12
| 2024-06-05T14:24:06
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
As the number of different models grows, it would be great to add some sort of `capabilities` property to the /tags response.
This property should describe what kinds of input a model can handle:
* Ollama now also includes models that can accept and process images (think llava)
* some models can only be used to generate embeddings
* some models are good with code, others are better at storytelling
* in the future there might be models that can also return images as part of the response
* etc.
This would ensure a UI front-end knows whether users should be allowed to send certain types of data to a model
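As a sketch of how a front-end might consume such a field (the `capabilities` key is hypothetical — it is exactly what this request asks for, not an existing API property):

```python
def models_supporting(tags_response: dict, needed: set) -> list:
    """Return names of models whose (hypothetical) 'capabilities'
    list covers everything in `needed`."""
    return [
        m["name"]
        for m in tags_response.get("models", [])
        if needed <= set(m.get("capabilities", []))
    ]
```

A UI could then, for example, enable its image-attachment button only for models where `"vision"` is listed.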
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4835/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7827
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7827/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7827/comments
|
https://api.github.com/repos/ollama/ollama/issues/7827/events
|
https://github.com/ollama/ollama/issues/7827
| 2,689,920,561
|
I_kwDOJ0Z1Ps6gVO4x
| 7,827
|
I hope that ollama can optimize the parallel performance of CPU computations?
|
{
"login": "CarsonJiang",
"id": 22452648,
"node_id": "MDQ6VXNlcjIyNDUyNjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/22452648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CarsonJiang",
"html_url": "https://github.com/CarsonJiang",
"followers_url": "https://api.github.com/users/CarsonJiang/followers",
"following_url": "https://api.github.com/users/CarsonJiang/following{/other_user}",
"gists_url": "https://api.github.com/users/CarsonJiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CarsonJiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CarsonJiang/subscriptions",
"organizations_url": "https://api.github.com/users/CarsonJiang/orgs",
"repos_url": "https://api.github.com/users/CarsonJiang/repos",
"events_url": "https://api.github.com/users/CarsonJiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/CarsonJiang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-25T09:19:57
| 2024-12-05T10:08:00
| 2024-12-02T15:38:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When serving as the LLM provider for GraphRAG, Ollama's 100% usage of a single CPU core prevents full utilization of the server's multiple GPUs.

settings.yaml
```
### This config file contains required core defaults that must be set, along with a handful of common optional settings.
### For a full list of available settings, see https://microsoft.github.io/graphrag/config/yaml/
### LLM settings ###
## There are a number of settings to tune the threading and token limits for LLM calls - check the docs.
encoding_model: cl100k_base # this needs to be matched to your model!
llm:
api_key: ollama # set this in the generated .env file
type: openai_chat # or azure_openai_chat
model: llama3.2
model_supports_json: true # recommended if this is available for your model.
# audience: "https://cognitiveservices.azure.com/.default"
api_base: http://localhost:11434/v1
request_timeout: 1800.0
max_tokens: 8000
parallelization:
stagger: 0.3
# num_threads: 50
async_mode: threaded # or asyncio
embeddings:
async_mode: threaded # or asyncio
vector_store:
type: lancedb
db_uri: 'output/lancedb'
container_name: default
overwrite: true
llm:
api_key: ollama
type: openai_embedding # or azure_openai_embedding
model: nomic-embed-text:latest
api_base: http://localhost:11434/v1
request_timeout: 1800.0
```
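An editorial note beyond the original report: Ollama's own scheduler environment variables (the same names that appear in its server-config startup log) govern how many requests and models it handles concurrently. A sketch of raising them before starting the server; the values are illustrative assumptions, not tuned recommendations:

```shell
# Illustrative values only -- tune for your hardware. These variable names
# appear in Ollama's startup log (OLLAMA_NUM_PARALLEL, OLLAMA_MAX_LOADED_MODELS,
# OLLAMA_SCHED_SPREAD).
export OLLAMA_NUM_PARALLEL=4        # concurrent requests per loaded model
export OLLAMA_MAX_LOADED_MODELS=2   # models kept resident at once
export OLLAMA_SCHED_SPREAD=1        # spread a model's layers across all GPUs
```

With these exported, a subsequent `ollama serve` picks them up from its environment.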
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7827/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7478
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7478/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7478/comments
|
https://api.github.com/repos/ollama/ollama/issues/7478/events
|
https://github.com/ollama/ollama/issues/7478
| 2,631,123,394
|
I_kwDOJ0Z1Ps6c08HC
| 7,478
|
Issue with Reinstalling Ollama: "Killed" Error on ollama serve
|
{
"login": "hosein97",
"id": 22956600,
"node_id": "MDQ6VXNlcjIyOTU2NjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/22956600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hosein97",
"html_url": "https://github.com/hosein97",
"followers_url": "https://api.github.com/users/hosein97/followers",
"following_url": "https://api.github.com/users/hosein97/following{/other_user}",
"gists_url": "https://api.github.com/users/hosein97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hosein97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hosein97/subscriptions",
"organizations_url": "https://api.github.com/users/hosein97/orgs",
"repos_url": "https://api.github.com/users/hosein97/repos",
"events_url": "https://api.github.com/users/hosein97/events{/privacy}",
"received_events_url": "https://api.github.com/users/hosein97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-11-03T10:41:51
| 2024-12-02T14:50:49
| 2024-12-02T14:50:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I was using Ollama without any issues on Linux, but after removing and reinstalling it, I hit a problem with the `ollama serve` command. The process fails with a "Killed" message, and afterwards the `ollama` binary is gone from `/usr/bin`.
Running `ollama serve` generates the following log:
```
2024/11/03 13:38:30 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/user/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-11-03T13:38:30.632+03:30 level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-11-03T13:38:30.632+03:30 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-11-03T13:38:30.632+03:30 level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.8)"
time=2024-11-03T13:38:30.744+03:30 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1349102374/runners
Killed
```
After this, the `ollama` command is no longer found in `/usr/bin`:
```
user@ubuntu:~$ ollama
-bash: /usr/bin/ollama: No such file or directory
```
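Editorial aside, not from the reporter: a bare "Killed" during runner extraction is typically the kernel OOM killer terminating the process (this is an assumption about the cause, not confirmed in the report). A hedged way to check, assuming access to kernel messages:

```shell
# Search kernel messages for OOM-killer activity around the failure.
dmesg 2>/dev/null | grep -i -E 'out of memory|oom|killed process' | tail -n 5
# Or, on a systemd system, via the journal:
journalctl -k --no-pager 2>/dev/null | grep -i 'oom' | tail -n 5
```

If an entry names the `ollama` process, the failure is memory pressure rather than a broken install.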
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.8
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7478/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2205
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2205/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2205/comments
|
https://api.github.com/repos/ollama/ollama/issues/2205/events
|
https://github.com/ollama/ollama/issues/2205
| 2,101,971,611
|
I_kwDOJ0Z1Ps59SYqb
| 2,205
|
Support additional AVX instruction sets
|
{
"login": "ddpasa",
"id": 112642920,
"node_id": "U_kgDOBrbLaA",
"avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddpasa",
"html_url": "https://github.com/ddpasa",
"followers_url": "https://api.github.com/users/ddpasa/followers",
"following_url": "https://api.github.com/users/ddpasa/following{/other_user}",
"gists_url": "https://api.github.com/users/ddpasa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddpasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddpasa/subscriptions",
"organizations_url": "https://api.github.com/users/ddpasa/orgs",
"repos_url": "https://api.github.com/users/ddpasa/repos",
"events_url": "https://api.github.com/users/ddpasa/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddpasa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7700262114,
"node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g",
"url": "https://api.github.com/repos/ollama/ollama/labels/build",
"name": "build",
"color": "006b75",
"default": false,
"description": "Issues relating to building ollama from source"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 20
| 2024-01-26T10:30:51
| 2024-12-26T14:14:12
| 2024-12-10T17:47:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have an Intel CPU that supports a number of AVX features, but most of them are not picked up by Ollama. Below is the relevant line from llama.log:
```
system info: AVX = 1 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
```
However, `lscpu` shows that avx512 and avx512_vnni are in fact supported.
I'm running Manjaro Linux with Ollama installed from the official repos, on an Intel Core i7-1065G7 with an Iris Plus G7 iGPU (the iGPU works very well with ncnn Vulkan inference).
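An editorial addition, not part of the original report: one way to list the AVX variants the CPU actually advertises, to compare against the flags the build enabled (Linux only; flag names are as reported by the kernel):

```shell
# Print each AVX-family flag the kernel reports for this CPU.
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -i '^avx' | sort -u
```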
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2205/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1841
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1841/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1841/comments
|
https://api.github.com/repos/ollama/ollama/issues/1841/events
|
https://github.com/ollama/ollama/pull/1841
| 2,069,134,581
|
PR_kwDOJ0Z1Ps5jZ9my
| 1,841
|
Update README.md - Community Integrations - vscode, Sublime Text, CLI…
|
{
"login": "icebaker",
"id": 113217272,
"node_id": "U_kgDOBr-O-A",
"avatar_url": "https://avatars.githubusercontent.com/u/113217272?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/icebaker",
"html_url": "https://github.com/icebaker",
"followers_url": "https://api.github.com/users/icebaker/followers",
"following_url": "https://api.github.com/users/icebaker/following{/other_user}",
"gists_url": "https://api.github.com/users/icebaker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/icebaker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/icebaker/subscriptions",
"organizations_url": "https://api.github.com/users/icebaker/orgs",
"repos_url": "https://api.github.com/users/icebaker/repos",
"events_url": "https://api.github.com/users/icebaker/events{/privacy}",
"received_events_url": "https://api.github.com/users/icebaker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-07T13:39:42
| 2024-09-05T20:24:10
| 2024-09-05T20:24:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1841",
"html_url": "https://github.com/ollama/ollama/pull/1841",
"diff_url": "https://github.com/ollama/ollama/pull/1841.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1841.patch",
"merged_at": null
}
|
:wave: I have added new integrations for CLI, Ruby, Visual Studio Code, Sublime Text, and Obsidian.
*VSCode Demonstration:
https://github.com/jmorganca/ollama/assets/113217272/e6ba9c62-56d5-401f-8b63-51407d9154bd
*CLI Demonstration:
https://github.com/jmorganca/ollama/assets/113217272/5612653b-c279-4fe7-910f-f734e26f4489
> _* The videos were edited: Typing speed accelerated by 1.5x, the delay before streaming was cut out, and the answers were accelerated by 4x._
- [Nano Bots CLI](https://github.com/icebaker/ruby-nano-bots)
- [Nano Bots for Ruby](https://github.com/icebaker/ruby-nano-bots)
- [Visual Studio Code](https://github.com/icebaker/vscode-nano-bots)
- [Sublime Text](https://github.com/icebaker/sublime-nano-bots)
- [Obsidian](https://github.com/icebaker/obsidian-nano-bots)
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1841/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/4949
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4949/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4949/comments
|
https://api.github.com/repos/ollama/ollama/issues/4949/events
|
https://github.com/ollama/ollama/pull/4949
| 2,342,245,568
|
PR_kwDOJ0Z1Ps5x49T7
| 4,949
|
Update README.md - Apple Vision Pro
|
{
"login": "gluonfield",
"id": 5672094,
"node_id": "MDQ6VXNlcjU2NzIwOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5672094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gluonfield",
"html_url": "https://github.com/gluonfield",
"followers_url": "https://api.github.com/users/gluonfield/followers",
"following_url": "https://api.github.com/users/gluonfield/following{/other_user}",
"gists_url": "https://api.github.com/users/gluonfield/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gluonfield/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gluonfield/subscriptions",
"organizations_url": "https://api.github.com/users/gluonfield/orgs",
"repos_url": "https://api.github.com/users/gluonfield/repos",
"events_url": "https://api.github.com/users/gluonfield/events{/privacy}",
"received_events_url": "https://api.github.com/users/gluonfield/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-09T13:10:21
| 2024-09-05T05:30:20
| 2024-09-05T05:30:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4949",
"html_url": "https://github.com/ollama/ollama/pull/4949",
"diff_url": "https://github.com/ollama/ollama/pull/4949.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4949.patch",
"merged_at": "2024-09-05T05:30:19"
}
|
Added Enchanted with Apple Vision Pro support
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4949/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6944
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6944/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6944/comments
|
https://api.github.com/repos/ollama/ollama/issues/6944/events
|
https://github.com/ollama/ollama/issues/6944
| 2,546,641,051
|
I_kwDOJ0Z1Ps6Xyqib
| 6,944
|
How can I package Ollama as an application module and integrate it into my own standalone software?
|
{
"login": "lymgithub",
"id": 26688670,
"node_id": "MDQ6VXNlcjI2Njg4Njcw",
"avatar_url": "https://avatars.githubusercontent.com/u/26688670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lymgithub",
"html_url": "https://github.com/lymgithub",
"followers_url": "https://api.github.com/users/lymgithub/followers",
"following_url": "https://api.github.com/users/lymgithub/following{/other_user}",
"gists_url": "https://api.github.com/users/lymgithub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lymgithub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lymgithub/subscriptions",
"organizations_url": "https://api.github.com/users/lymgithub/orgs",
"repos_url": "https://api.github.com/users/lymgithub/repos",
"events_url": "https://api.github.com/users/lymgithub/events{/privacy}",
"received_events_url": "https://api.github.com/users/lymgithub/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-09-25T00:29:50
| 2024-09-30T19:37:26
| 2024-09-25T20:08:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I have installed Ollama on Windows and successfully deployed the Qwen model. How can I package this existing environment and integrate it into my own standalone software?
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6944/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6559
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6559/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6559/comments
|
https://api.github.com/repos/ollama/ollama/issues/6559/events
|
https://github.com/ollama/ollama/pull/6559
| 2,495,082,710
|
PR_kwDOJ0Z1Ps553VO7
| 6,559
|
Go server command line options support
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-08-29T16:58:25
| 2024-09-03T20:53:55
| 2024-09-03T20:53:53
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6559",
"html_url": "https://github.com/ollama/ollama/pull/6559",
"diff_url": "https://github.com/ollama/ollama/pull/6559.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6559.patch",
"merged_at": "2024-09-03T20:53:53"
}
|
Adds support for command-line options controlling resource usage, such as mlock, mmap, and GPU allocation. In addition, switches to more recent LoRA formats.
|
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6559/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3967
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3967/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3967/comments
|
https://api.github.com/repos/ollama/ollama/issues/3967/events
|
https://github.com/ollama/ollama/issues/3967
| 2,266,568,778
|
I_kwDOJ0Z1Ps6HGRhK
| 3,967
|
The parameter is incorrect
|
{
"login": "nb001",
"id": 39027571,
"node_id": "MDQ6VXNlcjM5MDI3NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/39027571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nb001",
"html_url": "https://github.com/nb001",
"followers_url": "https://api.github.com/users/nb001/followers",
"following_url": "https://api.github.com/users/nb001/following{/other_user}",
"gists_url": "https://api.github.com/users/nb001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nb001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nb001/subscriptions",
"organizations_url": "https://api.github.com/users/nb001/orgs",
"repos_url": "https://api.github.com/users/nb001/repos",
"events_url": "https://api.github.com/users/nb001/events{/privacy}",
"received_events_url": "https://api.github.com/users/nb001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 11
| 2024-04-26T22:15:56
| 2025-01-28T08:22:02
| 2024-05-02T18:22:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
No matter which model I run, I always end up with: `The parameter is incorrect.`


OS: windows 10
GPU: Radeon RX 560 Series
### OS
Windows
### GPU
AMD
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3967/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2206
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2206/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2206/comments
|
https://api.github.com/repos/ollama/ollama/issues/2206/events
|
https://github.com/ollama/ollama/issues/2206
| 2,102,007,528
|
I_kwDOJ0Z1Ps59Shbo
| 2,206
|
Mixtral model issue ?
|
{
"login": "Nuranto",
"id": 1633463,
"node_id": "MDQ6VXNlcjE2MzM0NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1633463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nuranto",
"html_url": "https://github.com/Nuranto",
"followers_url": "https://api.github.com/users/Nuranto/followers",
"following_url": "https://api.github.com/users/Nuranto/following{/other_user}",
"gists_url": "https://api.github.com/users/Nuranto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nuranto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nuranto/subscriptions",
"organizations_url": "https://api.github.com/users/Nuranto/orgs",
"repos_url": "https://api.github.com/users/Nuranto/repos",
"events_url": "https://api.github.com/users/Nuranto/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nuranto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-01-26T10:52:21
| 2024-05-10T01:05:33
| 2024-05-10T01:05:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello,
I wanted to test the mixtral model, so I ran `ollama run mixtral`.
But after I say Hello, the model answers:
```
Hello! Welcome to Bra****op.AI. How can I assist you today?
[...]
```
It's as if ollama made me download some fine-tuned model?
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2206/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3434
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3434/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3434/comments
|
https://api.github.com/repos/ollama/ollama/issues/3434/events
|
https://github.com/ollama/ollama/issues/3434
| 2,217,867,046
|
I_kwDOJ0Z1Ps6EMfcm
| 3,434
|
Error: pull model manifest
|
{
"login": "taoxin2988",
"id": 47534353,
"node_id": "MDQ6VXNlcjQ3NTM0MzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/47534353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taoxin2988",
"html_url": "https://github.com/taoxin2988",
"followers_url": "https://api.github.com/users/taoxin2988/followers",
"following_url": "https://api.github.com/users/taoxin2988/following{/other_user}",
"gists_url": "https://api.github.com/users/taoxin2988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taoxin2988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taoxin2988/subscriptions",
"organizations_url": "https://api.github.com/users/taoxin2988/orgs",
"repos_url": "https://api.github.com/users/taoxin2988/repos",
"events_url": "https://api.github.com/users/taoxin2988/events{/privacy}",
"received_events_url": "https://api.github.com/users/taoxin2988/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-01T09:03:29
| 2024-04-02T09:37:29
| 2024-04-02T03:00:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I've tried the methods from the FAQ, but it does not work; any advice or guidance would be appreciated. Here are the commands I executed and the results:
[root@Installer ~]# ollama run qwen:0.5b
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/qwen/manifests/0.5b": dial tcp: lookup registry.ollama.ai: i/o timeout
|
{
"login": "taoxin2988",
"id": 47534353,
"node_id": "MDQ6VXNlcjQ3NTM0MzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/47534353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taoxin2988",
"html_url": "https://github.com/taoxin2988",
"followers_url": "https://api.github.com/users/taoxin2988/followers",
"following_url": "https://api.github.com/users/taoxin2988/following{/other_user}",
"gists_url": "https://api.github.com/users/taoxin2988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taoxin2988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taoxin2988/subscriptions",
"organizations_url": "https://api.github.com/users/taoxin2988/orgs",
"repos_url": "https://api.github.com/users/taoxin2988/repos",
"events_url": "https://api.github.com/users/taoxin2988/events{/privacy}",
"received_events_url": "https://api.github.com/users/taoxin2988/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3434/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2950
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2950/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2950/comments
|
https://api.github.com/repos/ollama/ollama/issues/2950/events
|
https://github.com/ollama/ollama/issues/2950
| 2,171,079,206
|
I_kwDOJ0Z1Ps6BaAom
| 2,950
|
When I access it through the API, ollama crashes with an 'out of memory' error while I use the gemma-7b model, but it works fine when I use 'ollama run gemma' in Terminal
|
{
"login": "panp1",
"id": 107453035,
"node_id": "U_kgDOBmeaaw",
"avatar_url": "https://avatars.githubusercontent.com/u/107453035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/panp1",
"html_url": "https://github.com/panp1",
"followers_url": "https://api.github.com/users/panp1/followers",
"following_url": "https://api.github.com/users/panp1/following{/other_user}",
"gists_url": "https://api.github.com/users/panp1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/panp1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/panp1/subscriptions",
"organizations_url": "https://api.github.com/users/panp1/orgs",
"repos_url": "https://api.github.com/users/panp1/repos",
"events_url": "https://api.github.com/users/panp1/events{/privacy}",
"received_events_url": "https://api.github.com/users/panp1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-03-06T09:49:19
| 2024-03-12T01:18:30
| 2024-03-12T01:18:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
[GIN] 2024/03/06 - 17:37:31 | 200 | 11.9573683s | ::1 | POST "/api/chat"
[GIN] 2024/03/06 - 17:43:04 | 200 | 11.4739ms | ::1 | GET "/api/tags"
time=2024-03-06T17:43:13.345+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-06T17:43:13.345+08:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
time=2024-03-06T17:43:13.345+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-06T17:43:13.345+08:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
time=2024-03-06T17:43:13.345+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library C:\Users\Paddy\AppData\Local\Temp\ollama1734183266\cuda_v11.3\ext_server.dll
time=2024-03-06T17:43:13.349+08:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\Paddy\\AppData\\Local\\Temp\\ollama1734183266\\cuda_v11.3\\ext_server.dll"
time=2024-03-06T17:43:13.349+08:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
llama_model_loader: loaded meta data with 24 key-value pairs and 254 tensors from F:\ollama\blobs\sha256-456402914e838a953e0cf80caa6adbe75383d9e63584a964f504a7bbb8f7aad9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = gemma
llama_model_loader: - kv 1: general.name str = gemma-7b-it
llama_model_loader: - kv 2: gemma.context_length u32 = 8192
llama_model_loader: - kv 3: gemma.embedding_length u32 = 3072
llama_model_loader: - kv 4: gemma.block_count u32 = 28
llama_model_loader: - kv 5: gemma.feed_forward_length u32 = 24576
llama_model_loader: - kv 6: gemma.attention.head_count u32 = 16
llama_model_loader: - kv 7: gemma.attention.head_count_kv u32 = 16
llama_model_loader: - kv 8: gemma.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: gemma.attention.key_length u32 = 256
llama_model_loader: - kv 10: gemma.attention.value_length u32 = 256
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,256000] = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,256000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 2
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 18: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - kv 23: general.file_type u32 = 2
llama_model_loader: - type f32: 57 tensors
llama_model_loader: - type q4_0: 196 tensors
llama_model_loader: - type q8_0: 1 tensors
llm_load_vocab: mismatch in special tokens definition ( 416/256000 vs 260/256000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = gemma
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 256000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 3072
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 16
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_rot = 192
llm_load_print_meta: n_embd_head_k = 256
llm_load_print_meta: n_embd_head_v = 256
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 24576
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.54 B
llm_load_print_meta: model size = 4.84 GiB (4.87 BPW)
llm_load_print_meta: general.name = gemma-7b-it
llm_load_print_meta: BOS token = 2 '<bos>'
llm_load_print_meta: EOS token = 1 '<eos>'
llm_load_print_meta: UNK token = 3 '<unk>'
llm_load_print_meta: PAD token = 0 '<pad>'
llm_load_print_meta: LF token = 227 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.19 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors: CPU buffer size = 796.88 MiB
llm_load_tensors: CUDA0 buffer size = 4955.54 MiB
...........................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4060 Laptop GPU, compute capability 8.9, VMM: yes
llama_kv_cache_init: CUDA0 KV buffer size = 896.00 MiB
llama_new_context_with_model: KV self size = 896.00 MiB, K (f16): 448.00 MiB, V (f16): 448.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 11.02 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 506.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 6.00 MiB
llama_new_context_with_model: graph splits (measure): 2
{"function":"initialize","level":"INFO","line":433,"msg":"initializing slots","n_slots":1,"tid":"6404","timestamp":1709718196}
{"function":"initialize","level":"INFO","line":445,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"6404","timestamp":1709718196}
time=2024-03-06T17:43:16.485+08:00 level=INFO source=dyn_ext_server.go:161 msg="Starting llama main loop"
{"function":"update_slots","level":"INFO","line":1565,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"25596","timestamp":1709718196}
{"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"25596","timestamp":1709718196}
{"function":"update_slots","level":"INFO","line":1801,"msg":"slot progression","n_past":0,"n_prompt_tokens_processed":978,"slot_id":0,"task_id":0,"tid":"25596","timestamp":1709718196}
{"function":"update_slots","level":"INFO","line":1825,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"25596","timestamp":1709718196}
CUDA error: out of memory
current device: 0, in function ggml_cuda_pool_malloc_vmm at C:\Users\jmorg\git\ollama\llm\llama.cpp\ggml-cuda.cu:8601
cuMemSetAccess(g_cuda_pool_addr[device] + g_cuda_pool_size[device], reserve_size, &access, 1)
GGML_ASSERT: C:\Users\jmorg\git\ollama\llm\llama.cpp\ggml-cuda.cu:256: !"CUDA error"
Wed Mar 6 17:46:57 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 551.61 Driver Version: 551.61 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4060 ... WDDM | 00000000:01:00.0 Off | N/A |
| N/A 38C P0 20W / 115W | 0MiB / 8188MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2950/timeline
| null |
completed
| false
|