| content (large_string, length 3–20.5k) | url (large_string, length 54–193) | branch (large_string, 4 classes) | source (large_string, 42 classes) | embeddings (list, length 384) | score (float64, −0.21 to 0.65) |
|---|---|---|---|---|---|
| allowed DMs. - Signal-cli does not expose read receipts for groups. ## Reactions (message tool) - Use `message action=react` with `channel=signal`. - Targets: sender E.164 or UUID (use `uuid:` from pairing output; bare UUID works too). - `messageId` is the Signal timestamp for the message you’re reacting to. - Group re... | https://github.com/openclaw/openclaw/blob/main//docs/channels/signal.md | main | opebclaw | [-0.040019214153289795, -0.038410525768995285, 0.05081019550561905, 0.04892665520310402, 0.058079320937395096, -0.011047769337892532, 0.043236128985881805, 0.0008872377220541239, 0.06418490409851074, 0.003547533182427287, 0.031264882534742355, -0.08132963627576828, 0.0073270853608846664, 0....] | 0.077411 |
| # Zalo (Bot API) Status: experimental. Direct messages only; groups coming soon per Zalo docs. ## Plugin required Zalo ships as a plugin and is not bundled with the core install. - Install via CLI: `openclaw plugins install @openclaw/zalo` - Or select \*\*Zalo\*\* during onboarding and confirm the install prompt - Deta... | https://github.com/openclaw/openclaw/blob/main//docs/channels/zalo.md | main | opebclaw | [-0.06082324683666229, -0.030233338475227356, -0.11576259136199951, 0.09327936917543411, 0.005516952835023403, -0.10892065614461899, 0.020120756700634956, -0.03992864489555359, -0.02565622515976429, 0.04109921306371689, 0.06065564602613449, -0.05200396105647087, 0.051694512367248535, 0.0748...] | 0.182034 |
| messages**: Download and process inbound images; send images via `sendPhoto`. - **Stickers**: Logged but not fully processed (no agent response). - **Unsupported types**: Logged (e.g., messages from protected users). ## Capabilities \| Feature \| Status \| \| --------------- \| ------------------------------ \| \| Direct mess... | https://github.com/openclaw/openclaw/blob/main//docs/channels/zalo.md | main | opebclaw | [-0.03988805413246155, 0.00019150755542796105, -0.06668759882450104, 0.04478810355067253, 0.08489659428596497, -0.11002626270055771, 0.07879894226789474, -0.02621455304324627, -0.008263296447694302, 0.046607378870248795, 0.09001392871141434, -0.06904908269643784, 0.0914948582649231, 0.12454...] | 0.195108 |
| # grammY Integration (Telegram Bot API) # Why grammY - TS-first Bot API client with built-in long-poll + webhook helpers, middleware, error handling, rate limiter. - Cleaner media helpers than hand-rolling fetch + FormData; supports all Bot API methods. - Extensible: proxy support via custom fetch, session middleware (... | https://github.com/openclaw/openclaw/blob/main//docs/channels/grammy.md | main | opebclaw | [-0.10787191241979599, 0.015413090586662292, 0.0660376250743866, -0.03346149995923042, -0.03222574666142464, -0.04770126938819885, 0.0062815723940730095, 0.006069393362849951, -0.0000238692773564253, 0.07368473708629608, -0.0020473299082368612, -0.045725323259830475, 0.022885214537382126, 0...] | 0.139165 |
| # Chat Channels OpenClaw can talk to you on any chat app you already use. Each channel connects via the Gateway. Text is supported everywhere; media and reactions vary by channel. ## Supported channels - [WhatsApp](/channels/whatsapp) — Most popular; uses Baileys and requires QR pairing. - [Telegram](/channels/telegram... | https://github.com/openclaw/openclaw/blob/main//docs/channels/index.md | main | opebclaw | [-0.07680819928646088, -0.11252883821725845, -0.00848351139575243, -0.05270868539810181, -0.025226378813385963, -0.05820036679506302, -0.011187374591827393, 0.006235943175852299, 0.04645011946558952, 0.0003839246928691864, 0.000582467473577708, -0.054162900894880295, -0.006551663391292095, ...] | 0.091221 |
| # WhatsApp (web channel) Status: WhatsApp Web via Baileys only. Gateway owns the session(s). ## Quick setup (beginner) 1. Use a \*\*separate phone number\*\* if possible (recommended). 2. Configure WhatsApp in `~/.openclaw/openclaw.json`. 3. Run `openclaw channels login` to scan the QR code (Linked Devices). 4. Start t... | https://github.com/openclaw/openclaw/blob/main//docs/channels/whatsapp.md | main | opebclaw | [-0.058180905878543854, -0.03821676969528198, -0.0381285659968853, -0.024214327335357666, -0.05704177916049957, -0.06289396435022354, -0.0036474387161433697, -0.016243478283286095, 0.04282021522521973, 0.058601539582014084, 0.02451150305569172, -0.012964440509676933, 0.04447188600897789, 0....] | -0.05024 |
| aren’t meant to send dozens of personal assistant messages. - Result: unreliable delivery and frequent blocks, so support was removed. ## Login + credentials - Login command: `openclaw channels login` (QR via Linked Devices). - Multi-account login: `openclaw channels login --account ` (`` = `accountId`). - Default acco... | https://github.com/openclaw/openclaw/blob/main//docs/channels/whatsapp.md | main | opebclaw | [-0.11470573395490646, 0.0013541688676923513, -0.018398378044366837, 0.012133973650634289, -0.003056635381653905, -0.07024487853050232, -0.012213512323796749, -0.003469758667051792, 0.03413216769695282, 0.035383567214012146, 0.04873665049672127, -0.016249364241957664, 0.023723021149635315, ...] | -0.005268 |
| current message body with envelope. - Quoted reply context is **always appended**: ``` [Replying to +1555 id:ABC123] > [/Replying] ``` - Reply metadata also set: - `ReplyToId` = stanzaId - `ReplyToBody` = quoted body or media placeholder - `ReplyToSender` = E.164 when known - Media-only inbound messages use placeholder... | https://github.com/openclaw/openclaw/blob/main//docs/channels/whatsapp.md | main | opebclaw | [0.0000013519288586394396, -0.011814219877123833, 0.037816960364580154, 0.01657598465681076, 0.00806389283388853, -0.04499891400337219, 0.04507801681756973, 0.0029920272063463926, 0.017445234581828117, 0.10087130963802338, 0.022730333730578423, -0.03257445991039276, 0.023319697007536888, 0....] | 0.023965 |
| - Gateway: `send` params include `gifPlayback: true` ## Voice notes (PTT audio) WhatsApp sends audio as **voice notes** (PTT bubble). - Best results: OGG/Opus. OpenClaw rewrites `audio/ogg` to `audio/ogg; codecs=opus`. - `[[audio_as_voice]]` is ignored for WhatsApp (audio already ships as voice note). ## Media limits +... | https://github.com/openclaw/openclaw/blob/main//docs/channels/whatsapp.md | main | opebclaw | [-0.015888409689068794, -0.019740119576454163, -0.02127033844590187, -0.06432552635669708, 0.04843354970216751, -0.05058223754167557, -0.0367443673312664, 0.014963764697313309, 0.0282230656594038, 0.00975065678358078, 0.016483066603541374, -0.007247328758239746, -0.054733265191316605, 0.013...] | 0.10379 |
| # Matrix (plugin) Matrix is an open, decentralized messaging protocol. OpenClaw connects as a Matrix \*\*user\*\* on any homeserver, so you need a Matrix account for the bot. Once it is logged in, you can DM the bot directly or invite it to rooms (Matrix "groups"). Beeper is a valid client option too, but it requires E... | https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md | main | opebclaw | [-0.02396550215780735, -0.054036710411310196, -0.15191225707530975, 0.013719773851335049, -0.007507911417633295, -0.04124687612056732, -0.03245547413825989, -0.03840484842658043, -0.004384357016533613, 0.025546319782733917, 0.0646720826625824, -0.057791367173194885, 0.054376520216464996, 0....] | 0.13455 |
| new store is created and the bot must be re-verified for encrypted rooms. \*\*Device verification:\*\* When E2EE is enabled, the bot will request verification from your other sessions on startup. Open Element (or another client) and approve the verification request to establish trust. Once verified, the bot can decrypt... | https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md | main | opebclaw | [-0.04476820304989815, -0.026874901726841927, -0.09843111783266068, 0.04767194017767906, -0.035194311290979385, -0.02712729200720787, 0.02662470005452633, -0.04479701817035675, -0.005247902125120163, 0.06482246518135071, 0.07062234729528427, -0.014463372528553009, 0.05163230746984482, 0.035...] | 0.010973 |
| `channels.matrix.dm.allowFrom`: DM allowlist (full Matrix user IDs). `open` requires `"*"`. The wizard resolves names to IDs when possible. - `channels.matrix.groupPolicy`: `allowlist \| open \| disabled` (default: allowlist). - `channels.matrix.groupAllowFrom`: allowlisted senders for group messages (full Matrix user ID... | https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md | main | opebclaw | [0.008055264130234718, -0.06223972141742706, -0.1197122186422348, 0.0015330496244132519, -0.0022573519963771105, -0.02371455915272236, 0.017562709748744965, -0.08009108155965805, -0.024831198155879974, 0.04515285789966583, 0.04550850763916969, 0.017216188833117485, 0.012201574631035328, 0.0...] | 0.07519 |
| # Nostr \*\*Status:\*\* Optional plugin (disabled by default). Nostr is a decentralized protocol for social networking. This channel enables OpenClaw to receive and respond to encrypted direct messages (DMs) via NIP-04. ## Install (on demand) ### Onboarding (recommended) - The onboarding wizard (`openclaw onboard`) and... | https://github.com/openclaw/openclaw/blob/main//docs/channels/nostr.md | main | opebclaw | [0.006353162229061127, -0.04694417491555214, -0.06901292502880096, 0.028320876881480217, 0.025473851710557938, -0.09580869972705841, -0.07184608280658722, -0.03048032894730568, -0.005811008624732494, -0.0017834847094491124, 0.023735078051686287, 0.0020045454148203135, -0.012520799413323402, ...] | 0.149788 |
| "privateKey": "${NOSTR\_PRIVATE\_KEY}", "relays": ["ws://localhost:7777"] } } } ``` ### Manual test 1. Note the bot pubkey (npub) from logs. 2. Open a Nostr client (Damus, Amethyst, etc.). 3. DM the bot pubkey. 4. Verify the response. ## Troubleshooting ### Not receiving messages - Verify the private key is valid. - En... | https://github.com/openclaw/openclaw/blob/main//docs/channels/nostr.md | main | opebclaw | [-0.04028762876987457, 0.004564320668578148, -0.054674554616212845, 0.05443704128265381, 0.06482952833175659, -0.049540307372808456, -0.05608687922358513, -0.028053322806954384, 0.016975075006484985, 0.07575813680887222, 0.03441033512353897, 0.011018200777471066, 0.0554664172232151, -0.0047...] | 0.134978 |
| # iMessage (legacy: imsg) > \*\*Recommended:\*\* Use [BlueBubbles](/channels/bluebubbles) for new iMessage setups. > > The `imsg` channel is a legacy external-CLI integration and may be removed in a future release. Status: legacy external CLI integration. Gateway spawns `imsg rpc` (JSON-RPC over stdio). ## Quick setup ... | https://github.com/openclaw/openclaw/blob/main//docs/channels/imessage.md | main | opebclaw | [-0.026510240510106087, -0.07106070220470428, 0.05410420522093773, -0.00697728106752038, -0.0007946208352223039, -0.12384095042943954, -0.002787779550999403, -0.004716148599982262, 0.043317876756191254, -0.0033504886087030172, 0.02970569208264351, 0.01185725349932909, -0.038455355912446976, ...] | 0.094693 |
| ssh -T gateway-host imsg "$@" ``` \*\*Remote attachments:\*\* When `cliPath` points to a remote host via SSH, attachment paths in the Messages database reference files on the remote machine. OpenClaw can automatically fetch these over SCP by setting `channels.imessage.remoteHost`: ```json5 { channels: { imessage: { cli... | https://github.com/openclaw/openclaw/blob/main//docs/channels/imessage.md | main | opebclaw | [0.011674740351736546, -0.0005820401129312813, 0.023794926702976227, 0.03443573787808418, 0.05692547559738159, -0.10063748061656952, -0.03464479744434357, -0.0022027650848031044, 0.03026917763054371, 0.004209119360893965, 0.04897010698914528, 0.004701102152466774, 0.00724699767306447, 0.062...] | 0.05609 |
| boundaries) before length chunking. - Media uploads are capped by `channels.imessage.mediaMaxMb` (default 16). ## Addressing / delivery targets Prefer `chat_id` for stable routing: - `chat_id:123` (preferred) - `chat_guid:...` - `chat_identifier:...` - direct handles: `imessage:+1555` / `sms:+1555` / `user@example.com`... | https://github.com/openclaw/openclaw/blob/main//docs/channels/imessage.md | main | opebclaw | [0.02283114194869995, -0.06655264645814896, 0.045934878289699554, -0.05053941160440445, 0.05783121660351753, -0.04605570062994957, 0.029530085623264313, 0.02630920149385929, 0.053292665630578995, 0.05774930119514465, 0.0814569890499115, 0.04825497791171074, -0.003960610367357731, 0.10443096...] | 0.083421 |
| # Zalo Personal (unofficial) Status: experimental. This integration automates a \*\*personal Zalo account\*\* via `zca-cli`. > \*\*Warning:\*\* This is an unofficial integration and may result in account suspension/ban. Use at your own risk. ## Plugin required Zalo Personal ships as a plugin and is not bundled with the... | https://github.com/openclaw/openclaw/blob/main//docs/channels/zalouser.md | main | opebclaw | [-0.08897373825311661, 0.02329140156507492, -0.1003023311495781, 0.03889769688248634, -0.042190227657556534, -0.10765101760625839, 0.06930484622716904, 0.009903405793011189, -0.038946446031332016, 0.0025957413017749786, 0.0595516562461853, -0.01691700518131256, 0.09901870787143707, 0.052682...] | 0.120837 |
| # Telegram (Bot API) Status: production-ready for bot DMs + groups via grammY. Long-polling by default; webhook optional. ## Quick setup (beginner) 1. Create a bot with \*\*@BotFather\*\* ([direct link](https://t.me/BotFather)). Confirm the handle is exactly `@BotFather`, then copy the token. 2. Set the token: - Env: `... | https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md | main | opebclaw | [-0.07689407467842102, -0.010172652080655098, 0.052017126232385635, -0.003951832186430693, -0.037139955908060074, -0.06360377371311188, -0.019753742963075638, -0.021740863099694252, 0.008262617513537407, 0.08065922558307648, 0.0034167326521128416, -0.0522182360291481, -0.03638661652803421, ...] | 0.065534 |
| stream partial replies in Telegram DMs using `sendMessageDraft`. Requirements: - Threaded Mode enabled for the bot in @BotFather (forum topic mode). - Private chat threads only (Telegram includes `message\_thread\_id` on inbound messages). - `channels.telegram.streamMode` not set to `"off"` (default: `"partial"`, `"blo... | https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md | main | opebclaw | [-0.02649189531803131, -0.0055556767620146275, 0.04433392733335495, 0.041401173919439316, 0.05237976834177971, -0.08030609041452408, -0.02248426154255867, -0.030665472149848938, 0.04025998339056969, 0.04545031115412712, -0.03419726341962814, -0.021584035828709602, -0.0018745115958154202, 0....] | 0.084793 |
| Telegram to see the chat ID (negative number like `-1001234567890`). \*\*Tip:\*\* For your own user ID, DM the bot and it will reply with your user ID (pairing message), or use `/whoami` once commands are enabled. \*\*Privacy note:\*\* `@userinfobot` is a third-party bot. If you prefer, add the bot to the group, send a... | https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md | main | opebclaw | [-0.042123451828956604, -0.03164307028055191, 0.015635617077350616, 0.024139583110809326, -0.006767492275685072, -0.08701332658529282, 0.031046345829963684, -0.009219049476087093, 0.009537074714899063, 0.029371419921517372, 0.0032467094715684652, -0.043243736028671265, 0.0017309060785919428, ...] | 0.110749 |
| after 1 hour). - Approve via: - `openclaw pairing list telegram` - `openclaw pairing approve telegram ```` ` - Pairing is the default token exchange used for Telegram DMs. Details: [Pairing](/start/pairing) - `channels.telegram.allowFrom` accepts numeric user IDs (recommended) or `@username` entries. It is **not** the ... | https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md | main | opebclaw | [-0.050702761858701706, -0.021808858960866928, -0.017610011622309685, 0.007801905274391174, -0.03904104605317116, -0.08703373372554779, -0.007961519062519073, 0.02241910994052887, -0.0013015343574807048, 0.062193214893341064, 0.006531771272420883, -0.050438910722732544, 0.01062302477657795, ...] | 0.069781 |
| Stickers are processed through the AI's vision capabilities to generate descriptions. Since the same stickers are often sent repeatedly, OpenClaw caches these descriptions to avoid redundant API calls. **How it works:** 1. **First encounter:** The sticker image is sent to the AI for vision analysis. The AI generates a ... | https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md | main | opebclaw | [-0.08073192089796066, 0.03888380900025368, 0.0064080082811415195, 0.0733967125415802, 0.08295129984617233, -0.10628343373537064, 0.017930641770362854, -0.021735522896051407, 0.139789417386055, -0.05249946564435959, 0.015753192827105522, 0.0019663411658257246, 0.017777765169739723, -0.01843...] | 0.132519 |
| reasoning stream is disabled. More context: [Streaming + chunking](/concepts/streaming). ## Retry policy Outbound Telegram API calls retry on transient network/429 errors with exponential backoff and jitter. Configure via `channels.telegram.retry`. See [Retry policy](/concepts/retry). ## Agent tool (messages + reaction... | https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md | main | opebclaw | [-0.036626268178224564, -0.054412223398685455, 0.030122920870780945, 0.010646292939782143, 0.06457263976335526, -0.052133090794086456, 0.024426838383078575, 0.01771705597639084, 0.05178266018629074, 0.038292910903692245, -0.014879759401082993, -0.006691597402095795, -0.02600618079304695, 0....] | 0.118858 |
| like `/status` don't work:** - Make sure your Telegram user ID is authorized (via pairing or `channels.telegram.allowFrom`) - Commands require authorization even in groups with `groupPolicy: "open"` **Long-polling aborts immediately on Node 22+ (often with proxies/custom fetch):** - Node 22+ is stricter about `AbortSig... | https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md | main | opebclaw | [0.0238551814109087, -0.016554757952690125, 0.05071394145488739, -0.017386360093951225, -0.015363574028015137, -0.06925986707210541, -0.07544811815023422, -0.06294284760951996, -0.030674617737531662, 0.07734562456607819, -0.0105286268517375, -0.0236319899559021, -0.004250751342624426, 0.017...] | 0.042344 |
| # Nextcloud Talk (plugin) Status: supported via plugin (webhook bot). Direct messages, rooms, reactions, and markdown messages are supported. ## Plugin required Nextcloud Talk ships as a plugin and is not bundled with the core install. Install via CLI (npm registry): ```bash openclaw plugins install @openclaw/nextcloud... | https://github.com/openclaw/openclaw/blob/main//docs/channels/nextcloud-talk.md | main | opebclaw | [-0.047624241560697556, -0.058945804834365845, -0.03979688510298729, -0.044228289276361465, 0.03700914978981018, -0.07742028683423996, -0.06703312695026398, -0.023344190791249275, 0.057953108102083206, 0.019230803474783897, 0.043908197432756424, -0.0715174749493599, 0.019590629264712334, 0....] | 0.071631 |
| # Google Chat (Chat API) Status: ready for DMs + spaces via Google Chat API webhooks (HTTP only). ## Quick setup (beginner) 1. Create a Google Cloud project and enable the \*\*Google Chat API\*\*. - Go to: [Google Chat API Credentials](https://console.cloud.google.com/apis/api/chat.googleapis.com/credentials) - Enable ... | https://github.com/openclaw/openclaw/blob/main//docs/channels/googlechat.md | main | opebclaw | [-0.10940342396497726, -0.05663745850324631, 0.011696627363562584, -0.031246846541762352, -0.05471125245094299, -0.09877172112464905, 0.011439492926001549, -0.050806812942028046, 0.006945687346160412, 0.05116930976510048, 0.010343683883547783, -0.0706341564655304, 0.01527366228401661, -0.01...] | -0.024389 |
| (port 8443):\*\* ```bash # If bound to localhost (127.0.0.1 or 0.0.0.0): tailscale serve --bg --https 8443 http://127.0.0.1:18789 # If bound to Tailscale IP only (e.g., 100.106.161.80): tailscale serve --bg --https 8443 http://100.106.161.80:18789 ``` 3. \*\*Expose only the webhook path publicly:\*\* ```bash # If bound... | https://github.com/openclaw/openclaw/blob/main//docs/channels/googlechat.md | main | opebclaw | [0.009524781256914139, 0.08561164885759354, -0.034104254096746445, -0.06004676595330238, -0.018102655187249184, -0.11236647516489029, -0.032782066613435745, -0.01878010295331478, -0.008278192020952702, -0.010327069088816643, 0.03516048192977905, -0.09901659935712814, -0.014444314874708652, ...] | 0.000821 |
| config get channels.googlechat ``` If it returns "Config path not found", add the configuration (see [Config highlights](#config-highlights)). 2. **Plugin not enabled**: Check plugin status: ```bash openclaw plugins list \| grep googlechat ``` If it shows "disabled", add `plugins.entries.googlechat.enabled: true` to you... | https://github.com/openclaw/openclaw/blob/main//docs/channels/googlechat.md | main | opebclaw | [-0.00868457742035389, -0.09454257786273956, -0.003082540351897478, -0.0012662650551646948, -0.005479829385876656, -0.09134643524885178, -0.04304429516196251, -0.07142417132854462, -0.030454708263278008, -0.009752911515533924, 0.06324315071105957, -0.054918572306632996, -0.03318919241428375, ...] | -0.050969 |
| # Mattermost (plugin) Status: supported via plugin (bot token + WebSocket events). Channels, groups, and DMs are supported. Mattermost is a self-hostable team messaging platform; see the official site at [mattermost.com](https://mattermost.com) for product details and downloads. ## Plugin required Mattermost ships as a... | https://github.com/openclaw/openclaw/blob/main//docs/channels/mattermost.md | main | opebclaw | [-0.005706494674086571, -0.1072135716676712, -0.06538622826337814, 0.042759038507938385, 0.03524088114500046, -0.10614099353551865, -0.016280392184853554, 0.018887948244810104, 0.007166416384279728, 0.014995408244431019, 0.04962120205163956, -0.046900808811187744, -0.023955857381224632, 0.0...] | 0.105313 |
| # Discord (Bot API) Status: ready for DM and guild text channels via the official Discord bot gateway. ## Quick setup (beginner) 1. Create a Discord bot and copy the bot token. 2. In the Discord app settings, enable \*\*Message Content Intent\*\* (and \*\*Server Members Intent\*\* if you plan to use allowlists or name ... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [-0.05725705251097679, -0.10878714174032211, -0.019174717366695404, 0.011126385070383549, -0.041606418788433075, -0.03714622184634209, 0.005116064567118883, 0.01681920327246189, 0.013928879983723164, 0.01962086372077465, 0.001052610226906836, -0.022921893745660782, 0.008275113068521023, 0.0...] | 0.062516 |
| isolated session keys (`agent::discord:slash:`) rather than the shared `main` session. Note: Name → id resolution uses guild member search and requires Server Members Intent; if the bot can’t search members, use ids or `<@id>` mentions. Note: Slugs are lowercase with spaces replaced by `-`. Channel names are slugged wi... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [-0.03875581547617912, -0.010599755682051182, -0.037344496697187424, 0.012865968979895115, -0.015143903903663158, -0.03270871937274933, 0.021311843767762184, 0.004881676286458969, -0.0034001157619059086, 0.02923022210597992, -0.002813871018588543, -0.021876931190490723, 0.05448125675320625, ...] | 0.103513 |
| as mentions for guild messages. - Multi-agent override: set per-agent patterns on `agents.list[].groupChat.mentionPatterns`. - If `channels` is present, any channel not listed is denied by default. - Use a `"*"` channel entry to apply defaults across all channels; explicit channel entries override the wildcard. - Threa... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [0.0011822348460555077, -0.09795238822698593, 0.0005498864920809865, 0.005901043768972158, -0.022558337077498436, -0.011730784550309181, 0.03126395493745804, -0.05496307834982872, 0.001146793132647872, 0.011785801500082016, -0.0017156078247353435, -0.06648535281419754, 0.07601626962423325, ...] | 0.057153 |
| Discord `retry_after` when available, with exponential backoff and jitter. Configure via `channels.discord.retry`. See [Retry policy](/concepts/retry). ## Config ```json5 { channels: { discord: { enabled: true, token: "abc.123", groupPolicy: "allowlist", guilds: { "*": { channels: { general: { allow: true }, }, }, }, m... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [-0.05079291760921478, -0.07037555426359177, 0.06101891025900841, 0.045153189450502396, 0.004834258463233709, -0.023209379985928535, 0.041715990751981735, -0.08233651518821716, 0.018383821472525597, -0.005514224525541067, 0.0164234209805727, 0.041651833802461624, -0.04565059021115303, 0.028...] | 0.082228 |
| `false` to disable). - `reactions` (covers react + read reactions) - `stickers`, `emojiUploads`, `stickerUploads`, `polls`, `permissions`, `messages`, `threads`, `pins`, `search` - `memberInfo`, `roleInfo`, `channelInfo`, `voiceStatus`, `events` - `channels` (create/edit/delete channels + categories + permissions) - `r... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [-0.06398756802082062, -0.06230982393026352, 0.011523043736815453, 0.04705267399549484, 0.0829024538397789, 0.005254737567156553, 0.03817960247397423, 0.029979484155774117, 0.0013771330704912543, 0.039799243211746216, 0.060923654586076736, -0.04524540901184082, -0.013805175200104713, 0.0590...] | 0.213358 |
| channels in the allowlisted guild are allowed. - To allow **no channels**, set `channels.discord.groupPolicy: "disabled"` (or keep an empty allowlist). - The configure wizard accepts `Guild/Channel` names (public + private) and resolves them to IDs when possible. - On startup, OpenClaw resolves channel/user names in al... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [-0.012477128766477108, -0.097713902592659, -0.08864704519510269, 0.012409129180014133, -0.00911477580666542, -0.03676673769950867, -0.015709655359387398, -0.05365040525794029, -0.027944594621658325, -0.023910164833068848, 0.01694699190557003, -0.010197379626333714, 0.006792859639972448, 0....] | 0.068416 |
| # LINE (plugin) LINE connects to OpenClaw via the LINE Messaging API. The plugin runs as a webhook receiver on the gateway and uses your channel access token + channel secret for authentication. Status: supported via plugin. Direct messages, group chats, media, locations, Flex messages, template messages, and quick rep... | https://github.com/openclaw/openclaw/blob/main//docs/channels/line.md | main | opebclaw | [-0.08621446788311005, -0.01687822863459587, -0.12222238630056381, 0.051471952348947525, -0.03980923816561699, -0.12319580465555191, 0.007097961846739054, 0.01806970313191414, 0.06852179020643234, -0.00831916555762291, 0.0047349631786346436, -0.03895566985011101, 0.0158794317394495, -0.0190...] | 0.066578 |
| ### Run a model ``` ollama run gemma3 ``` ### Launch integrations ``` ollama launch ``` Configure and launch external applications to use Ollama models. This provides an interactive way to set up and start integrations with supported apps. #### Supported integrations - \*\*OpenCode\*\* - Open-source coding assistant - ... | https://github.com/ollama/ollama/blob/main//docs/cli.mdx | main | ollama | [-0.08234318345785141, -0.06286581605672836, -0.044265564531087875, -0.04176494851708412, -0.0562053844332695, -0.06023188680410385, -0.0465020090341568, 0.07974686473608017, -0.046845633536577225, -0.017451811581850052, 0.040055371820926666, -0.08036747574806213, 0.02370145544409752, 0.003...] | 0.180969 |
| ## Install To install Ollama, run the following command: ```shell curl -fsSL https://ollama.com/install.sh \| sh ``` ## Manual install If you are upgrading from a prior version, you should remove the old libraries with `sudo rm -rf /usr/lib/ollama` first. Download and extract the package: ```shell curl -fsSL https://oll... | https://github.com/ollama/ollama/blob/main//docs/linux.mdx | main | ollama | [-0.04092478007078171, 0.006369300186634064, -0.037535659968853, 0.03426586836576462, 0.0032200762070715427, -0.12913955748081207, -0.06516725569963455, 0.027653377503156662, -0.012914850376546383, -0.054279450327157974, 0.02890542335808277, -0.009943856857717037, -0.07885291427373886, 0.02...] | 0.031845 |
| ## Nvidia Ollama supports Nvidia GPUs with compute capability 5.0+ and driver version 531 and newer. Check your compute compatibility to see if your card is supported: [https://developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus) \| Compute Capability \| Family \| Cards \| \| ------------------ \| --------... | https://github.com/ollama/ollama/blob/main//docs/gpu.mdx | main | ollama | [-0.06072494760155678, -0.03281419724225998, -0.008617082610726357, 0.010285579599440098, -0.0715637132525444, -0.07295554131269455, -0.09122493863105774, -0.05798002704977989, -0.06069985404610634, -0.05849428474903107, -0.01377780269831419, -0.06710223853588104, -0.06176045164465904, 0.02...] | 0.013952 |
| the following AMD GPUs via the ROCm library: > \*\*NOTE:\*\* > Additional AMD GPU support is provided by the Vulkan Library - see below. ### Linux Support \| Family \| Cards and accelerators \| \| -------------- \| --------------------------------------------------------------------------------------------------------------... | https://github.com/ollama/ollama/blob/main//docs/gpu.mdx | main | ollama | [-0.009242328815162182, -0.04292284697294235, -0.0710827112197876, 0.028188740834593773, 0.04676514118909836, -0.0890885442495346, -0.08493079990148544, 0.08312798291444778, -0.054338548332452774, -0.07001113146543503, -0.006145707331597805, -0.02430565096437931, -0.0653025358915329, 0.0564...] | 0.103039 |
| devices. ## Metal (Apple GPUs) Ollama supports GPU acceleration on Apple devices via the Metal API. ## Vulkan GPU Support > \*\*NOTE:\*\* > Vulkan is currently an Experimental feature. To enable, you must set OLLAMA\_VULKAN=1 for the Ollama server as described in the [FAQ](faq#how-do-i-configure-ollama-server) Addition... | https://github.com/ollama/ollama/blob/main//docs/gpu.mdx | main | ollama | [-0.024971840903162956, 0.0011717317393049598, -0.010328173637390137, 0.07082479447126389, -0.08436375111341476, -0.10711783915758133, -0.07612698525190353, 0.011942468583583832, -0.02788693644106388, -0.10660244524478912, -0.0026725654024630785, -0.04558572173118591, -0.0767219141125679, -...] | 0.137097 |
| [Ollama](https://ollama.com) is the easiest way to get up and running with large language models such as gpt-oss, Gemma 3, DeepSeek-R1, Qwen3 and more. Get up and running with your first model or integrate Ollama with your favorite tools Download Ollama on macOS, Windows or Linux Ollama's cloud models offer larger mode... | https://github.com/ollama/ollama/blob/main//docs/index.mdx | main | ollama | [-0.09046205133199692, -0.10440309345722198, -0.013958445750176907, 0.008966258727014065, 0.0052662501111626625, -0.06417570263147354, -0.04893427714705467, 0.05227525904774666, -0.020333636552095413, -0.04459034651517868, -0.027149079367518425, 0.016569748520851135, -0.0753353163599968, 0....] | 0.1588 |
| ## CPU only ```shell docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama ``` ## Nvidia GPU Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installation). ### Install with Apt 1. Configure the repository ```sh... | https://github.com/ollama/ollama/blob/main//docs/docker.mdx | main | ollama | [-0.018310999497771263, 0.04482314735651016, -0.034718479961156845, 0.021612994372844696, -0.013261478394269943, -0.10679537802934647, -0.055348314344882965, -0.03146573156118393, -0.020187778398394585, 0.0021899347193539143, -0.015735596418380737, -0.06658294796943665, -0.06430896371603012, ...] | -0.014173 |
## Table of Contents - [Importing a Safetensors adapter](#Importing-a-fine-tuned-adapter-from-Safetensors-weights) - [Importing a Safetensors model](#Importing-a-model-from-Safetensors-weights) - [Importing a GGUF file](#Importing-a-GGUF-based-model-or-adapter) - [Sharing models on ollama.com](#Sharing-your-model-on-ol... | https://github.com/ollama/ollama/blob/main//docs/import.mdx | main | ollama | [
… ] | 0.067607 |
Ollama can quantize FP16 and FP32 based models into different quantization levels using the `-q/--quantize` flag with the `ollama create` command. First, create a Modelfile with the FP16 or FP32 based model you wish to quantize. ```dockerfile FROM /path/to/my/gemma/f16/model ``` Use `ollama create` to then create the q... | https://github.com/ollama/ollama/blob/main//docs/import.mdx | main | ollama | [
… ] | 0.035231 |
Context length is the maximum number of tokens that the model has access to in memory. The default context length in Ollama is 4096 tokens. Tasks which require large context like web search, agents, and coding tools should be set to at least 64000 tokens. ## Setting context length Setting a larger context length will i... | https://github.com/ollama/ollama/blob/main//docs/context-length.mdx | main | ollama | [
… ] | 0.127803 |
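The context-length chunk above notes that tasks like agents and coding tools need at least 64000 tokens. A minimal sketch of raising the window for a single request through the REST API's `options.num_ctx` field (the model name is illustrative; the field follows the `/api/generate` docs):

```python
import json

# Build a /api/generate request body that raises the context window for
# this call only, via the per-request options.num_ctx override.
def build_generate_request(model: str, prompt: str, num_ctx: int = 65536) -> str:
    body = {
        "model": model,
        "prompt": prompt,
        "options": {"num_ctx": num_ctx},  # tokens of context available to the model
    }
    return json.dumps(body)

payload = build_generate_request("gemma3", "Summarize this repository.", num_ctx=65536)
print(payload)
```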
Ollama provides a powerful templating engine backed by Go's built-in templating engine to construct prompts for your large language model. This feature is a valuable tool to get the most out of your models. ## Basic Template Structure A basic Go template consists of three main parts: - \*\*Layout\*\*: The overall struc... | https://github.com/ollama/ollama/blob/main//docs/template.mdx | main | ollama | [
… ] | 0.13127 |
useful for models trained to call external tools and can be a powerful tool for retrieving real-time data or performing complex tasks. #### Mistral Mistral v0.3 and Mixtral 8x22B support tool calling. ```go {{- range $index, $\_ := .Messages }} {{- if eq .Role "user" }} {{- if and (le (len (slice $.Messages $index)) 2) $... | https://github.com/ollama/ollama/blob/main//docs/template.mdx | main | ollama | [
… ] | 0.100516 |
Ollama runs as a native Windows application, including NVIDIA and AMD Radeon GPU support. After installing Ollama for Windows, Ollama will run in the background and the `ollama` command line is available in `cmd`, `powershell` or your favorite terminal application. As usual the Ollama [API](/api) will be served on `htt... | https://github.com/ollama/ollama/blob/main//docs/windows.mdx | main | ollama | [
… ] | 0.053301 |
Ollama CLI and GPU library dependencies for Nvidia. If you have an AMD GPU, also download and extract the additional ROCm package `ollama-windows-amd64-rocm.zip` into the same directory. This allows for embedding Ollama in existing applications, or running it as a system service via `ollama serve` with tools such as [N... | https://github.com/ollama/ollama/blob/main//docs/windows.mdx | main | ollama | [
… ] | 0.026716 |
## How can I upgrade Ollama? Ollama on macOS and Windows will automatically download updates. Click on the taskbar or menubar item and then click "Restart to update" to apply the update. Updates can also be installed by downloading the latest version [manually](https://ollama.com/download/). On Linux, re-run the instal... | https://github.com/ollama/ollama/blob/main//docs/faq.mdx | main | ollama | [
… ] | 0.078577 |
for model pulls, only HTTPS. Setting `HTTP\_PROXY` may interrupt client connections to the server. ### How do I use Ollama behind a proxy in Docker? The Ollama Docker container image can be configured to use a proxy by passing `-e HTTPS\_PROXY=https://proxy.example.com` when starting the container. Alternatively, the D... | https://github.com/ollama/ollama/blob/main//docs/faq.mdx | main | ollama | [
… ] | 0.01189 |
acceleration in Docker? The Ollama Docker container can be configured with GPU acceleration in Linux or Windows (with WSL2). This requires the [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia-container-toolkit). See [ollama/ollama](https://hub.docker.com/r/ollama/ollama) for more details. GPU acceleration is... | https://github.com/ollama/ollama/blob/main//docs/faq.mdx | main | ollama | [
… ] | -0.008578 |
when using CPU inference, or VRAM for GPU inference) then multiple models can be loaded at the same time. For a given model, if there is sufficient available memory when the model is loaded, it is configured to allow parallel request processing. If there is insufficient available memory to load a new model request whil... | https://github.com/ollama/ollama/blob/main//docs/faq.mdx | main | ollama | [
… ] | 0.112323 |
precision, this usually has no noticeable impact on the model's quality (recommended if not using f16). - `q4\_0` - 4-bit quantization, uses approximately 1/4 the memory of `f16` with a small-medium loss in precision that may be more noticeable at higher context sizes. How much the cache quantization impacts the model'... | https://github.com/ollama/ollama/blob/main//docs/faq.mdx | main | ollama | [
… ] | 0.005846 |
A Modelfile is the blueprint to create and share customized models using Ollama. ## Table of Contents - [Format](#format) - [Examples](#examples) - [Instructions](#instructions) - [FROM (Required)](#from-required) - [Build from existing model](#build-from-existing-model) - [Build from a Safetensors model](#build-from-a... | https://github.com/ollama/ollama/blob/main//docs/modelfile.mdx | main | ollama | [
… ] | 0.089384 |
| num\_ctx | Sets the size of the context window used to generate the next token. (Default: 2048) | int | num\_ctx 4096 | | repeat\_last\_n | Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num\_ctx) | int | repeat\_last\_n 64 | | repeat\_penalty | Sets how strongly ... | https://github.com/ollama/ollama/blob/main//docs/modelfile.mdx | main | ollama | [
… ] | 0.124655 |
be specified with a `FROM` instruction. If the base model is not the same as the base model that the adapter was tuned from, the behaviour will be erratic. #### Safetensor adapter ``` ADAPTER ``` Currently supported Safetensor adapters: - Llama (including Llama 2, Llama 3, and Llama 3.1) - Mistral (including Mistral 1, ... | https://github.com/ollama/ollama/blob/main//docs/modelfile.mdx | main | ollama | [
… ] | 0.069349 |
This quickstart will walk you through running your first model with Ollama. To get started, download Ollama on macOS, Windows or Linux. [Download Ollama](https://ollama.com/download) ## Run a model Open a terminal and run the command: ```sh ollama run gemma3 ``` ```sh ollama pull gemma3 ``` Lastly, chat with the model... | https://github.com/ollama/ollama/blob/main//docs/quickstart.mdx | main | ollama | [
… ] | 0.133258 |
## Cloud Models Ollama's cloud models are a new kind of model in Ollama that can run without a powerful GPU. Instead, cloud models are automatically offloaded to Ollama's cloud service while offering the same capabilities as local models, making it possible to keep using your local tools while running larger models tha... | https://github.com/ollama/ollama/blob/main//docs/cloud.mdx | main | ollama | [
… ] | 0.043686 |
## System Requirements \* MacOS Sonoma (v14) or newer \* Apple M series (CPU and GPU support) or x86 (CPU only) ## Filesystem Requirements The preferred method of installation is to mount the `ollama.dmg` and drag-and-drop the Ollama application to the system-wide `Applications` folder. Upon startup, the Ollama app wil... | https://github.com/ollama/ollama/blob/main//docs/macos.mdx | main | ollama | [
… ] | 0.177024 |
Sometimes Ollama may not perform as expected. One of the best ways to figure out what happened is to take a look at the logs. Find the logs on \*\*Mac\*\* by running the command: ```shell cat ~/.ollama/logs/server.log ``` On \*\*Linux\*\* systems with systemd, the logs can be found with this command: ```shell journalct... | https://github.com/ollama/ollama/blob/main//docs/troubleshooting.mdx | main | ollama | [
… ] | 0.029257 |
and add `"exec-opts": ["native.cgroupdriver=cgroupfs"]` to the docker configuration. ## NVIDIA GPU Discovery When Ollama starts up, it takes inventory of the GPUs present in the system to determine compatibility and how much VRAM is available. Sometimes this discovery can fail to find your GPUs. In general, running the... | https://github.com/ollama/ollama/blob/main//docs/troubleshooting.mdx | main | ollama | [
… ] | -0.006582 |
Ollama's web search API can be used to augment models with the latest information to reduce hallucinations and improve accuracy. Web search is provided as a REST API with deeper tool integrations in the Python and JavaScript libraries. This also enables models like OpenAI’s gpt-oss models to conduct long-running resear... | https://github.com/ollama/ollama/blob/main//docs/capabilities/web-search.mdx | main | ollama | [
… ] | 0.091264 |
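The web-search chunk above describes a REST API for augmenting models with fresh results. A hedged sketch of assembling such a request; the endpoint path and the `max_results` field are taken from the web-search docs excerpt and should be treated as assumptions, and the key is read from `OLLAMA_API_KEY`:

```python
import json
import os

# Assumed endpoint for Ollama's hosted web search API (per the docs excerpt).
WEB_SEARCH_URL = "https://ollama.com/api/web_search"

def build_web_search_request(query: str, max_results: int = 3):
    # Bearer auth with an API key taken from the environment.
    headers = {
        "Authorization": f"Bearer {os.environ.get('OLLAMA_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"query": query, "max_results": max_results})
    return headers, body

headers, body = build_web_search_request("what is ollama?")
print(body)
```

The request itself would then be sent with any HTTP client (e.g. `urllib.request` or `requests`).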
with open models\*\*\n\n[Download](https://ollama.com/download) [Explore models](https://ollama.com/models)\n\nAvailable for macOS, Windows, and Linux', links=['https://ollama.com/', 'https://ollama.com/models', 'https://github.com/ollama/ollama'] ) ``` #### JavaScript SDK ```tsx import { Ollama } from "ollama"; const ... | https://github.com/ollama/ollama/blob/main//docs/capabilities/web-search.mdx | main | ollama | [
… ] | 0.07339 |
> Add the following configuration: ```json { "mcpServers": { "web\_search\_and\_fetch": { "type": "stdio", "command": "uv", "args": ["run", "path/to/web-search-mcp.py"], "env": { "OLLAMA\_API\_KEY": "your\_api\_key\_here" } } } } ```  ### Codex Ollama works well with Ope... | https://github.com/ollama/ollama/blob/main//docs/capabilities/web-search.mdx | main | ollama | [
… ] | 0.018315 |
Structured outputs let you enforce a JSON schema on model responses so you can reliably extract structured data, describe images, or keep every reply consistent. ## Generating structured JSON ```shell curl -X POST http://localhost:11434/api/chat -H "Content-Type: application/json" -d '{ "model": "gpt-oss", "messages": ... | https://github.com/ollama/ollama/blob/main//docs/capabilities/structured-outputs.mdx | main | ollama | [
… ] | -0.000351 |
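The structured-outputs chunk above enforces a JSON schema on model replies via the `format` field of `/api/chat`. A minimal sketch of building such a request body; the schema and model name are illustrative:

```python
import json

# An illustrative JSON schema the model's reply must conform to.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

# /api/chat request body: the "format" field carries the schema that
# constrains the model's output (structured outputs).
request_body = json.dumps({
    "model": "gpt-oss",
    "messages": [{"role": "user", "content": "Tell me about Alice, age 30."}],
    "format": schema,
    "stream": False,
})
print(request_body)
```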
Streaming allows you to render text as it is produced by the model. Streaming is enabled by default through the REST API, but disabled by default in the SDKs. To enable streaming in the SDKs, set the `stream` parameter to `True`. ## Key streaming concepts 1. Chatting: Stream partial assistant messages. Each chunk inclu... | https://github.com/ollama/ollama/blob/main//docs/capabilities/streaming.mdx | main | ollama | [
… ] | 0.08423 |
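The streaming chunk above says each chunk carries a partial assistant message. A small sketch of accumulating those chunks into the full reply; the two sample chunks below are hand-written stand-ins for what a streaming `/api/chat` call would emit:

```python
import json

# Accumulate streamed chat chunks into the full assistant message.
# Each NDJSON chunk carries a partial "message"; "done": true ends the stream.
def accumulate(chunks):
    text = ""
    for raw in chunks:
        chunk = json.loads(raw)
        text += chunk.get("message", {}).get("content", "")
        if chunk.get("done"):
            break
    return text

sample = [
    '{"message": {"role": "assistant", "content": "Hel"}, "done": false}',
    '{"message": {"role": "assistant", "content": "lo!"}, "done": true}',
]
print(accumulate(sample))  # Hello!
```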
Embeddings turn text into numeric vectors you can store in a vector database, search with cosine similarity, or use in RAG pipelines. The vector length depends on the model (typically 384–1024 dimensions). ## Recommended models - [embeddinggemma](https://ollama.com/library/embeddinggemma) - [qwen3-embedding](https://ol... | https://github.com/ollama/ollama/blob/main//docs/capabilities/embeddings.mdx | main | ollama | [
… ] | 0.041888 |
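The embeddings chunk above mentions searching stored vectors with cosine similarity. A self-contained sketch of that comparison, using toy 4-dimensional vectors in place of real 384-to-1024-dimensional embeddings:

```python
import math

# Cosine similarity between two embedding vectors: the dot product
# normalized by both vector lengths, giving a value in [-1, 1].
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.1, 0.2, 0.0, 0.4]
doc = [0.1, 0.1, 0.0, 0.5]
print(round(cosine(query, doc), 3))  # → 0.966
```

In practice the vectors would come from an embeddings endpoint and the comparison would run over every stored document, keeping the highest-scoring matches.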
Vision models accept images alongside text so the model can describe, classify, and answer questions about what it sees. ## Quick start ```shell ollama run gemma3 ./image.png whats in this image? ``` ## Usage with Ollama's API Provide an `images` array. SDKs accept file paths, URLs or raw bytes while the REST API expec... | https://github.com/ollama/ollama/blob/main//docs/capabilities/vision.mdx | main | ollama | [
… ] | 0.116425 |
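The vision chunk above notes that the REST API expects base64-encoded image data in an `images` array on the message. A sketch of building that payload; the byte string stands in for a real `open("image.png", "rb").read()`:

```python
import base64
import json

# Placeholder bytes standing in for a real image file's contents.
fake_image_bytes = b"\x89PNG-placeholder"
encoded = base64.b64encode(fake_image_bytes).decode("ascii")

# /api/chat request body with a base64 image attached to the user message.
request_body = json.dumps({
    "model": "gemma3",
    "messages": [{
        "role": "user",
        "content": "What is in this image?",
        "images": [encoded],
    }],
})
print(len(json.loads(request_body)["messages"][0]["images"]))  # 1
```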
Ollama supports tool calling (also known as function calling) which allows a model to invoke tools and incorporate their results into its replies. ## Calling a single tool Invoke a single tool and include its response in a follow-up request. Also known as "single-shot" tool calling. ```shell curl -s http://localhost:11... | https://github.com/ollama/ollama/blob/main//docs/capabilities/tool-calling.mdx | main | ollama | [
… ] | 0.113585 |
"stream": false, "tools": [ { "type": "function", "function": { "name": "get\_temperature", "description": "Get the current temperature for a city", "parameters": { "type": "object", "required": ["city"], "properties": { "city": {"type": "string", "description": "The name of the city"} } } } }, { "type": "function", "f... | https://github.com/ollama/ollama/blob/main//docs/capabilities/tool-calling.mdx | main | ollama | [
… ] | 0.017405 |
['city'], properties: { city: { type: 'string', description: 'The name of the city' }, }, }, }, } ] const messages = [{ role: 'user', content: 'What are the current weather conditions and temperature in New York and London?' }] const response = await ollama.chat({ model: 'qwen3', messages, tools, think: true }) // add ... | https://github.com/ollama/ollama/blob/main//docs/capabilities/tool-calling.mdx | main | ollama | [
… ] | 0.062033 |
of toolCalls) { const fn = availableFunctions[call.function.name as ToolName] if (!fn) { continue } const args = call.function.arguments as { a: number; b: number } console.log(`Calling ${call.function.name} with arguments`, args) const result = fn(args.a, args.b) console.log(`Result: ${result}`) messages.push({ role: ... | https://github.com/ollama/ollama/blob/main//docs/capabilities/tool-calling.mdx | main | ollama | [
… ] | 0.091705 |
temperature for a city Args: city: The name of the city Returns: The current temperature for the city """ temperatures = { 'New York': '22°C', 'London': '15°C', } return temperatures.get(city, 'Unknown') available\_functions = { 'get\_temperature': get\_temperature, } # directly pass the function as part of the tools l... | https://github.com/ollama/ollama/blob/main//docs/capabilities/tool-calling.mdx | main | ollama | [
… ] | 0.025221 |
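The tool-calling chunks above dispatch the model's tool calls to local functions by name and append each result as a tool message. A self-contained sketch of that loop, with a hand-written tool call standing in for one returned in a chat response's `message.tool_calls`:

```python
# Local function the model is allowed to call.
def get_temperature(city: str) -> str:
    temperatures = {"New York": "22°C", "London": "15°C"}
    return temperatures.get(city, "Unknown")

# Name -> callable dispatch table.
available_functions = {"get_temperature": get_temperature}

# A tool call as it would appear in the model's response.
tool_calls = [{"function": {"name": "get_temperature",
                            "arguments": {"city": "London"}}}]

messages = []
for call in tool_calls:
    fn = available_functions.get(call["function"]["name"])
    if fn is None:
        continue  # skip tools the model invented
    result = fn(**call["function"]["arguments"])
    # Feed the result back as a tool message for the follow-up request.
    messages.append({"role": "tool",
                     "tool_name": call["function"]["name"],
                     "content": result})
print(messages[0]["content"])  # 15°C
```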
Thinking-capable models emit a `thinking` field that separates their reasoning trace from the final answer. Use this capability to audit model steps, animate the model \*thinking\* in a UI, or hide the trace entirely when you only need the final response. ## Supported models - [Qwen 3](https://ollama.com/library/qwen3)... | https://github.com/ollama/ollama/blob/main//docs/capabilities/thinking.mdx | main | ollama | [
… ] | 0.092024 |
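The thinking chunk above says thinking-capable models emit a separate `thinking` field alongside the final answer. A sketch of splitting the two from a response; the JSON below is a hand-written stand-in for a real `/api/chat` reply:

```python
import json

# Sample reply shaped like a thinking-capable model's message.
response = json.loads(
    '{"message": {"role": "assistant",'
    ' "thinking": "The user wants a greeting.",'
    ' "content": "Hello!"}}'
)

# Keep the reasoning trace and the final answer separate, so the trace
# can be audited, animated in a UI, or hidden entirely.
trace = response["message"].get("thinking", "")
answer = response["message"]["content"]
print(answer)  # Hello!
```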
## Install Install [Cline](https://docs.cline.bot/getting-started/installing-cline) in your IDE. ## Usage with Ollama 1. Open Cline settings > `API Configuration` and set `API Provider` to `Ollama` 2. Select a model under `Model` or type one (e.g. `qwen3`) 3. Update the context window to at least 32K tokens under `Cont... | https://github.com/ollama/ollama/blob/main//docs/integrations/cline.mdx | main | ollama | [
… ] | 0.079208 |
## Install Install [Zed](https://zed.dev/download). ## Usage with Ollama 1. In Zed, click the \*\*star icon\*\* in the bottom-right corner, then select \*\*Configure\*\*. 2. Under \*\*LLM Providers\*\*, choose \*\*Ollama\*\* 3. Confirm the \*\*Host URL\*\* is `http://localhost:11434`, then click \*\*Connect\*\* 4. Once ... | https://github.com/ollama/ollama/blob/main//docs/integrations/zed.mdx | main | ollama | [
… ] | 0.006202 |
## Install Install [Xcode](https://developer.apple.com/xcode/) ## Usage with Ollama Ensure Apple Intelligence is set up and the latest Xcode version is v26.0 1. Click \*\*Xcode\*\* in top left corner > \*\*Settings\*\* 2. Select \*\*Locally Hosted\*\*, enter port \*\*11434\*\* and click \*\*Add\*\* 3. Select the \*\*star ... | https://github.com/ollama/ollama/blob/main//docs/integrations/xcode.mdx | main | ollama | [
… ] | 0.084251 |
## Overview [Onyx](http://onyx.app/) is a self-hostable Chat UI that integrates with all Ollama models. Features include: - Creating custom Agents - Web search - Deep Research - RAG over uploaded documents and connected apps - Connectors to applications like Google Drive, Email, Slack, etc. - MCP and OpenAPI Actions su... | https://github.com/ollama/ollama/blob/main//docs/integrations/onyx.mdx | main | ollama | [
… ] | 0.09382 |
## Install Install [n8n](https://docs.n8n.io/choose-n8n/). ## Using Ollama Locally 1. In the top right corner, click the dropdown and select \*\*Create Credential\*\* 2. Under \*\*Add new credential\*\* select \*\*Ollama\*\* 3. Confirm Base URL is set to `http://localhost:11434` if running locally or `http://host.docker.... | https://github.com/ollama/ollama/blob/main//docs/integrations/n8n.mdx | main | ollama | [
… ] | 0.013948 |
OpenClaw is a personal AI assistant that runs on your own devices. It bridges messaging services (WhatsApp, Telegram, Slack, Discord, iMessage, and more) to AI coding agents through a centralized gateway. ## Install Install [OpenClaw](https://openclaw.ai/) ```bash npm install -g openclaw@latest ``` Then run the onboard... | https://github.com/ollama/ollama/blob/main//docs/integrations/openclaw.mdx | main | ollama | [
… ] | 0.140337 |
## Install Install the [Codex CLI](https://developers.openai.com/codex/cli/): ``` npm install -g @openai/codex ``` ## Usage with Ollama Codex requires a larger context window. It is recommended to use a context window of at least 64k tokens. ### Quick setup ``` ollama launch codex ``` To configure without launching: ``... | https://github.com/ollama/ollama/blob/main//docs/integrations/codex.mdx | main | ollama | [
… ] | 0.059437 |
## Install Install [Roo Code](https://marketplace.visualstudio.com/items?itemName=RooVeterinaryInc.roo-cline) from the VS Code Marketplace. ## Usage with Ollama 1. Open Roo Code in VS Code and click the \*\*gear icon\*\* on the top right corner of the Roo Code window to open \*\*Provider Settings\*\* 2. Set `API Provid... | https://github.com/ollama/ollama/blob/main//docs/integrations/roo-code.mdx | main | ollama | [
… ] | 0.041292 |
Claude Code is Anthropic's agentic coding tool that can read, modify, and execute code in your working directory. Open models can be used with Claude Code through Ollama's Anthropic-compatible API, enabling you to use models such as `glm-4.7`, `qwen3-coder`, `gpt-oss`. … | … | … | … | … | … |
: ```bash curl -fsSL https://app.factory.ai/cli | sh ``` Droid requires a larger context window. It is recommended to use a context window of at least 64k tokens. See [Context length](/context-length) for more information. ## Usage with Ollama ### Quick setup ```ba... | https://github.com/ollama/ollama/blob/main//docs/integrations/droid.mdx | main | ollama | [
… ] | 0.041617 |
OpenCode is an open-source AI coding assistant that runs in your terminal. ## Install Install the [OpenCode CLI](https://opencode.ai): ```bash curl -fsSL https://opencode.ai/install | bash ``` OpenCode requires a larger context window. It is recommended to use a context window of at least 64k tokens. See [Context lengt... | https://github.com/ollama/ollama/blob/main//docs/integrations/opencode.mdx | main | ollama | [
… ] | 0.220986 |
## Install Install [marimo](https://marimo.io). You can use `pip` or `uv` for this. You can also use `uv` to create a sandboxed environment for marimo by running: ``` uvx marimo edit --sandbox notebook.py ``` ## Usage with Ollama 1. In marimo, go to the user settings and go to the AI tab. From here you can find and con... | https://github.com/ollama/ollama/blob/main//docs/integrations/marimo.mdx | main | ollama | [
… ] | 0.207051 |
This example uses **IntelliJ**; the same steps apply to other JetBrains IDEs (e.g., PyCharm). ## Install Install [IntelliJ](https://www.jetbrains.com/idea/). ## Usage with Ollama To use **Ollama**, you will need a [JetBrains AI Subscription](https://www.jetbrains.com/ai-ides/buy/?section=personal&billing=yearly). 1... | https://github.com/ollama/ollama/blob/main//docs/integrations/jetbrains.mdx | main | ollama | [
-0.08624786883592606, … (384-dim embedding, truncated) | 0.209175 |
## Goose Desktop Install [Goose](https://block.github.io/goose/docs/getting-started/installation/) Desktop. ### Usage with Ollama 1. In Goose, open **Settings** → **Configure Provider**. 2. Find **Ollama**, click **Configure** 3. Confirm **API Host** is `http://localhost:11434` and click Submit ### C... | https://github.com/ollama/ollama/blob/main//docs/integrations/goose.mdx | main | ollama | [
-0.0925910696387291, … (384-dim embedding, truncated) | -0.025897 |
Certain API endpoints stream responses by default, such as `/api/generate`. These responses are provided in the newline-delimited JSON format (i.e. the `application/x-ndjson` content type). For example: ```json {"model":"gemma3","created_at":"2025-10-26T17:15:24.097767Z","response":"That","done":false} {"model":"gemma... | https://github.com/ollama/ollama/blob/main//docs/api/streaming.mdx | main | ollama | [
-0.07212688773870468, … (384-dim embedding, truncated) | 0.099386 |
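The streaming row above describes Ollama's newline-delimited JSON responses. A minimal sketch of accumulating such a stream, assuming lines shaped like the documented `gemma3` output (the sample below is shortened and hand-written, not captured from a live server):

```python
import json

def parse_ndjson(stream: str) -> list[dict]:
    """Parse newline-delimited JSON into a list of dicts, skipping blank lines."""
    return [json.loads(line) for line in stream.splitlines() if line.strip()]

# Shortened sample resembling /api/generate streaming chunks.
sample = (
    '{"model":"gemma3","response":"That","done":false}\n'
    '{"model":"gemma3","response":" is","done":false}\n'
    '{"model":"gemma3","response":"","done":true}\n'
)

chunks = parse_ndjson(sample)
text = "".join(c["response"] for c in chunks)  # concatenate partial responses
print(text)                 # → That is
print(chunks[-1]["done"])   # final chunk signals completion → True
```

In a real client each line arrives incrementally over HTTP, so the same `json.loads`-per-line step runs as lines are received rather than on a complete string.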
Ollama provides compatibility with the [Anthropic Messages API](https://docs.anthropic.com/en/api/messages) to help connect existing applications to Ollama, including tools like Claude Code. ## Usage ### Environment variables To use Ollama with tools that expect the Anthropic API (like Claude Code), set these environme... | https://github.com/ollama/ollama/blob/main//docs/api/anthropic-compatibility.mdx | main | ollama | [
-0.04499582573771477, … (384-dim embedding, truncated) | 0.059937 |
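The row above mentions pointing Anthropic-API tools like Claude Code at Ollama via environment variables; the truncated chunk does not show the variable names, so the ones below are assumptions based on Claude Code's configuration conventions, and the token value is a placeholder since local access is unauthenticated:

```shell
# Assumed variable names; verify against the full anthropic-compatibility doc.
export ANTHROPIC_BASE_URL=http://localhost:11434
export ANTHROPIC_AUTH_TOKEN=ollama   # placeholder; local Ollama ignores the value
```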
configured to use Ollama as its backend. ### Recommended models For coding use cases, models like `glm-4.7`, `minimax-m2.1`, and `qwen3-coder` are recommended. Download a model before use: ```shell ollama pull qwen3-coder ``` > Note: Qwen 3 coder is a 30B parameter model requiring at least 24GB of VRAM to run smoothly.... | https://github.com/ollama/ollama/blob/main//docs/api/anthropic-compatibility.mdx | main | ollama | [
-0.012919542379677296, … (384-dim embedding, truncated) | 0.024457 |
| Citations | `citations` content blocks | | PDF support | `document` content blocks with PDF files | | Server-sent errors | `error` events during streaming (errors return HTTP status) | ### Partial support | Feature | Status | |---------|--------| | Image content | Base64 images supported; URL images not supported | |... | https://github.com/ollama/ollama/blob/main//docs/api/anthropic-compatibility.mdx | main | ollama | [
-0.007922690361738205, … (384-dim embedding, truncated) | 0.102533 |
No authentication is required when accessing Ollama's API locally via `http://localhost:11434`. Authentication is required for the following: * Running cloud models via ollama.com * Publishing models * Downloading private models Ollama supports two authentication methods: * **Signing in**: sign in from your loc... | https://github.com/ollama/ollama/blob/main//docs/api/authentication.mdx | main | ollama | [
-0.0696077048778534, … (384-dim embedding, truncated) | 0.003807 |
## Status codes Endpoints return appropriate HTTP status codes based on the success or failure of the request in the HTTP status line (e.g. `HTTP/1.1 200 OK` or `HTTP/1.1 400 Bad Request`). Common status codes are: - `200`: Success - `400`: Bad Request (missing parameters, invalid JSON, etc.) - `404`: Not Found (model ... | https://github.com/ollama/ollama/blob/main//docs/api/errors.mdx | main | ollama | [
-0.095962293446064, … (384-dim embedding, truncated) | 0.082424 |
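The errors row above lists the status codes Ollama returns; a small sketch of mapping them to client-side messages (the descriptions are paraphrased from the chunk, and the fallback string is illustrative, not Ollama's wording):

```python
# Status codes quoted from the errors doc chunk above.
STATUS_MESSAGES = {
    200: "Success",
    400: "Bad Request (missing parameters, invalid JSON, etc.)",
    404: "Not Found (e.g. unknown model)",
}

def describe_status(code: int) -> str:
    """Return the documented meaning of a status code, or a generic fallback."""
    return STATUS_MESSAGES.get(code, f"Unexpected status {code}")

print(describe_status(404))  # → Not Found (e.g. unknown model)
```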
Ollama's API allows you to run and interact with models programmatically. ## Get started If you're just getting started, follow the [quickstart](/quickstart) documentation to get up and running with Ollama's API. ## Base URL After installation, Ollama's API is served by default at: ``` http://localhost:11434/api ``` For... | https://github.com/ollama/ollama/blob/main//docs/api/introduction.mdx | main | ollama | [
-0.11080898344516754, … (384-dim embedding, truncated) | 0.091937 |
Ollama's API responses include metrics that can be used for measuring performance and model usage: * `total_duration`: How long the response took to generate * `load_duration`: How long the model took to load * `prompt_eval_count`: How many input tokens were processed * `prompt_eval_duration`: How long it too... | https://github.com/ollama/ollama/blob/main//docs/api/usage.mdx | main | ollama | [
-0.0674109011888504, … (384-dim embedding, truncated) | 0.160216 |
Ollama provides compatibility with parts of the [OpenAI API](https://platform.openai.com/docs/api-reference) to help connect existing applications to Ollama. ## Usage ### Simple `v1/chat/completions` example ```python basic.py from openai import OpenAI client = OpenAI( base_url='http://localhost:11434/v1/', api_key='... | https://github.com/ollama/ollama/blob/main//docs/api/openai-compatibility.mdx | main | ollama | [
-0.06421857327222824, … (384-dim embedding, truncated) | 0.071143 |
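The truncated snippet in the row above targets Ollama's OpenAI-compatible endpoint; without a running server, a minimal sketch of the request body such a client would POST to `http://localhost:11434/v1/chat/completions` (payload shape follows the standard chat completions format; the model name is an example):

```python
import json

def chat_payload(model: str, user_message: str) -> str:
    """Build a JSON body for the v1/chat/completions endpoint."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })

body = chat_payload("qwen3-coder", "Why is the sky blue?")
print(body)
```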