# Signal (signal-cli)

Status: external CLI integration. Gateway talks to `signal-cli` over HTTP JSON-RPC + SSE.

## Prerequisites

- OpenClaw installed on your server (Linux flow below tested on Ubuntu 24).
- `signal-cli` available on the host where the gateway runs.
- A phone number that can receive one verification SMS...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/signal.md
"Open Signal".

3. Run from the same external IP as the browser session when possible.
4. Run registration again immediately (captcha tokens expire quickly):

   ```bash
   signal-cli -a + register --captcha ''
   signal-cli -a + verify
   ```

5. Configure OpenClaw, restart gateway, verify channel:

   ```bash
   # If you run the gateway a...
   ```

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/signal.md
channel=signal target=uuid:123e4567-e89b-12d3-a456-426614174000 messageId=1737630212345 emoji=🔥

message action=react channel=signal target=+15551234567 messageId=1737630212345 emoji=🔥 remove=true

message action=react channel=signal target=signal:group: targetAuthor=uuid: messageId=1737630212345 emoji=✅
```

Config:

- ...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/signal.md
# Zalo (Bot API)

Status: experimental. DMs are supported. The [Capabilities](#capabilities) section below reflects current Marketplace-bot behavior.

## Bundled plugin

Zalo ships as a bundled plugin in current OpenClaw releases, so normal packaged builds do not need a separate install. If you are on an older build or a ...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/zalo.md
- Pairing is the default token exchange. Details: [Pairing](/channels/pairing)
- `channels.zalo.allowFrom` accepts numeric user IDs (no username lookup available).

## Access control (Groups)

For **Zalo Bot Creator / Marketplace bots**, group support was not available in practice because the bot could not be added to ...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/zalo.md
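The `allowFrom` rule above can be written out as a config fragment. This is a minimal sketch: only the `channels.zalo.allowFrom` key comes from this doc, and the numeric IDs are placeholders.

```json5
{
  channels: {
    zalo: {
      // Numeric Zalo user IDs only; no username lookup is available.
      allowFrom: [123456789, 987654321],
    },
  },
}
```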
| ... | ❌ Not supported |
| Native commands | ❌ Not supported |
| Streaming | ⚠️ Blocked (2000 char limit) |

## Delivery targets (CLI/cron)

- Use a chat id as the target.
- Example: `openclaw message send --channel zalo --target 123456789 --message "hi"`.

## Troubleshooting

**Bot doesn't respond:**

- Check that the token is va...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/zalo.md
# QA Channel

`qa-channel` is a bundled synthetic message transport for automated OpenClaw QA. It is not a production channel. It exists to exercise the same channel plugin boundary used by real transports while keeping state deterministic and fully inspectable.

## What it does today

- Slack-class target grammar:
  - `dm:...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/qa-channel.md
# Chat Channels

OpenClaw can talk to you on any chat app you already use. Each channel connects via the Gateway. Text is supported everywhere; media and reactions vary by channel.

## Supported channels

- [BlueBubbles](/channels/bluebubbles) — **Recommended for iMessage**; uses the BlueBubbles macOS server REST API ...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/index.md
# WhatsApp (Web channel)

Status: production-ready via WhatsApp Web (Baileys). Gateway owns linked session(s).

## Install (on demand)

- Onboarding (`openclaw onboard`) and `openclaw channels add --channel whatsapp` prompt to install the WhatsApp plugin the first time you select it.
- `openclaw channels login --channel w...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/whatsapp.md
before mention/reply activation

Note: if no `channels.whatsapp` block exists at all, runtime group-policy fallback is `allowlist` (with a warning log), even if `channels.defaults.groupPolicy` is set.

Group replies require mention by default. Mention detection includes:

- explicit WhatsApp mentions of the bot identity
- ...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/whatsapp.md
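One way to avoid the `allowlist` fallback described above is to define an explicit `channels.whatsapp` block, even an empty one, so `channels.defaults.groupPolicy` is honored. This is a sketch based only on the fallback behavior stated in this doc:

```json5
{
  channels: {
    defaults: {
      groupPolicy: "allowlist",
    },
    // An explicit whatsapp block (even empty) means the runtime
    // group-policy fallback warning path is not taken.
    whatsapp: {},
  },
}
```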
Yes (encouraged) | Ack + agent reactions with encouraged guidance |

Default: `"minimal"`. Per-account overrides use `channels.whatsapp.accounts..reactionLevel`.

```json5
{
  channels: {
    whatsapp: {
      reactionLevel: "ack",
    },
  },
}
```

## Acknowledgment reactions

WhatsApp supports immediate ack reactions on inbound receipt v...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/whatsapp.md
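A per-account override might look like the following sketch; the account id `work` is a hypothetical placeholder, and only the `accounts..reactionLevel` key path comes from this doc:

```json5
{
  channels: {
    whatsapp: {
      reactionLevel: "minimal", // global default
      accounts: {
        // "work" is a hypothetical account id.
        work: {
          reactionLevel: "ack", // per-account override
        },
      },
    },
  },
}
```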
# Matrix

Matrix is a bundled channel plugin for OpenClaw. It uses the official `matrix-js-sdk` and supports DMs, rooms, threads, media, reactions, polls, location, and E2EE.

## Bundled plugin

Matrix ships as a bundled plugin in current OpenClaw releases, so normal packaged builds do not need a separate install. If you ...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md
auth is not set directly in config.

Environment variable equivalents (used when the config key is not set):

- `MATRIX_HOMESERVER`
- `MATRIX_ACCESS_TOKEN`
- `MATRIX_USER_ID`
- `MATRIX_PASSWORD`
- `MATRIX_DEVICE_ID`
- `MATRIX_DEVICE_NAME`

For non-default accounts, use account-scoped env vars:

- `MATRIX__HOMES...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md
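Setting the default-account variables listed above can be sketched as a shell snippet; the values are placeholders, and only the variable names come from this doc:

```shell
# Placeholders — substitute your homeserver, token, and Matrix user id.
# OpenClaw reads these when the corresponding config keys are unset.
export MATRIX_HOMESERVER="https://matrix.example.org"
export MATRIX_ACCESS_TOKEN="syt_example_token"
export MATRIX_USER_ID="@openclaw:example.org"
export MATRIX_DEVICE_NAME="openclaw-gateway"
```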
final-only delivery.

With `streaming: "off"`:

- `blockStreaming: true` sends each finished block as a normal notifying Matrix message.
- `blockStreaming: false` sends only the final completed reply as a normal notifying Matrix message.

### Self-hosted push rules for quiet finalized previews

If you run your own Matrix i...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md
quiet draft preview and the final in-place edit should notify once the block or turn finishes.

If you need to remove the rule later, delete that same rule ID with the receiving user's token:

```bash
curl -sS -X DELETE \
  -H "Authorization: Bearer $USER_ACCESS_TOKEN" \
  "https://matrix.example.org/_matrix/client/v3/pus...
```

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md
Bootstrap cross-signing and verification state:

```bash
openclaw matrix verify bootstrap
```

Verbose bootstrap diagnostics:

```bash
openclaw matrix verify bootstrap --verbose
```

Force a fresh cross-signing identity reset before bootstrapping:

```bash
openclaw matrix verify bootstrap --force-reset-cross-signing
```
...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md
### Fresh backup baseline

If you want to keep future encrypted messages working and accept losing unrecoverable old history, run these commands in order:

```bash
openclaw matrix verify backup reset --yes
openclaw matrix verify backup status --verbose
openclaw matrix verify status
```

Add `--account ` to each command wh...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md
```bash
... set --name "OpenClaw Assistant"
openclaw matrix profile set --avatar-url https://cdn.example.org/avatar.png
```

Add `--account ` when you want to target a named Matrix account explicitly.

Matrix accepts `mxc://` avatar URLs directly. When you pass an `http://` or `https://` avatar URL, OpenClaw uploads it to Matrix fir...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md
- `threadBindings.idleHours`
- `threadBindings.maxAgeHours`
- `threadBindings.spawnSubagentSessions`
- `threadBindings.spawnAcpSessions`

Matrix thread-bound spawn flags are opt-in:

- Set `threadBindings.spawnSubagentSessions: true` to allow top-level `/focus` to create and bind new Matrix threads.
- Set `threadBindings...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md
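The opt-in flags above might be combined like this; only the `threadBindings.*` key names come from this doc, while the nesting under `channels.matrix` and the numeric values are assumptions:

```json5
{
  channels: {
    matrix: {
      threadBindings: {
        idleHours: 24,    // illustrative value
        maxAgeHours: 168, // illustrative value
        spawnSubagentSessions: true, // opt in: /focus may spawn threads
        spawnAcpSessions: false,     // leave ACP spawning off
      },
    },
  },
}
```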
for a peer with:

```bash
openclaw matrix direct inspect --user-id @alice:example.org
```

Repair it with:

```bash
openclaw matrix direct repair --user-id @alice:example.org
```

The repair flow:

- prefers a strict 1:1 DM that is already mapped in `m.direct`
- falls back to any currently joined strict 1:1 DM with that use...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md
later. If Matrix already has exactly one named account, or `defaultAccount` points at an existing named account key, single-account-to-multi-account repair/setup promotion preserves that account instead of creating a fresh `accounts.default` entry. Only Matrix auth/bootstrap keys move into that promoted account; shared...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md
policy to `allowlist`, and forces all active DM policies except `disabled` (including `pairing` and `open`) to `allowlist`. Does not affect `disabled` policies.

- `allowBots`: allow messages from other configured OpenClaw Matrix accounts (`true` or `"mentions"`).
- `groupPolicy`: `open`, `allowlist`, or `disabled`.
- `...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md
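The two keys above can be sketched together as a config fragment; the nesting under `channels.matrix` is an assumption, and the values are chosen from the options listed in this doc:

```json5
{
  channels: {
    matrix: {
      groupPolicy: "allowlist",  // open | allowlist | disabled
      allowBots: "mentions",     // or true to accept all bot traffic
    },
  },
}
```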
- `groups..autoReply`: room-level mention-gating override. `true` disables mention requirements for that room; `false` forces them back on.
- `groups..skills`: optional room-level skill filter.
- `groups..systemPrompt`: optional room-level system prompt snippet.
- `rooms`: legacy alias for `groups`.
- `actions`: per-ac...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/matrix.md
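A room-level override using those keys might look like the sketch below; the room id and skill name are hypothetical placeholders, and the nesting under `channels.matrix` is an assumption:

```json5
{
  channels: {
    matrix: {
      groups: {
        // "!room:example.org" is a placeholder room id.
        "!room:example.org": {
          autoReply: true, // no mention required in this room
          skills: ["search"], // hypothetical skill name
          systemPrompt: "Keep replies short.",
        },
      },
    },
  },
}
```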
# Nostr

**Status:** Optional bundled plugin (disabled by default until configured).

Nostr is a decentralized protocol for social networking. This channel enables OpenClaw to receive and respond to encrypted direct messages (DMs) via NIP-04.

## Bundled plugin

Current OpenClaw releases ship Nostr as a bundled plugin,...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/nostr.md
| NIP | Status | Description |
| --- | --- | --- |
| NIP-01 | Supported | Basic event format + profile metadata |
| NIP-04 | Supported | Encrypted DMs (`kind:4`) |
| NIP-17 | Planned | Gift-wrapped DMs |
| NIP-44 | Planned | Versioned encryption |

## Testing

### Local relay

```bash
# ...
```

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/nostr.md
# QQ Bot

QQ Bot connects to OpenClaw via the official QQ Bot API (WebSocket gateway). The plugin supports C2C private chat, group @messages, and guild channel messages with rich media (images, voice, video, files).

Status: bundled plugin. Direct messages, group chats, guild channels, and media are supported. Reactions ...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/qqbot.md
| Append `?` to any command for usage help (for example `/bot-upgrade ?`).

## Troubleshooting

- **Bot replies "gone to Mars":** credentials not configured or Gateway not started.
- **No inbound messages:** verify `appId` and `clientSecret` are correct, and the bot is enabled on the QQ Open Platform.
- **Setup...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/qqbot.md
# iMessage (legacy: imsg)

For new iMessage deployments, use [BlueBubbles](/channels/bluebubbles). The `imsg` integration is legacy and may be removed in a future release.

Status: legacy external CLI integration. Gateway spawns `imsg rpc` and communicates over JSON-RPC on stdio (no separate daemon/port).

Preferred iMess...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/imessage.md
iMessage conversation route to the spawned ACP session.

- `/new` and `/reset` reset the same bound ACP session in place.
- `/acp close` closes the ACP session and removes the binding.

Configured persistent bindings are supported through top-level `bindings[]` entries with `type: "acp"` and `match.channel: "imessage"`. ...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/imessage.md
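A persistent binding entry using the fields named above might be sketched as follows; only `bindings[]`, `type: "acp"`, and `match.channel: "imessage"` come from this doc, and any further match fields are assumptions:

```json5
{
  bindings: [
    {
      type: "acp",
      match: {
        channel: "imessage",
        // additional match fields are deployment-specific
      },
    },
  ],
}
```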
Mac running Messages

Re-run in an interactive GUI terminal in the same user/session context and approve prompts:

```bash
imsg chats --limit 1
imsg send "test"
```

Confirm Full Disk Access + Automation are granted for the process context that runs OpenClaw/`imsg`.

## Configuration reference pointers

- [Configuration ref...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/imessage.md
# Groups

OpenClaw treats group chats consistently across surfaces: Discord, iMessage, Matrix, Microsoft Teams, Signal, Slack, Telegram, WhatsApp, Zalo.

## Beginner intro (2 minutes)

OpenClaw "lives" on your own messaging accounts. There is no separate WhatsApp bot user. If **you** are in a group, OpenClaw can see t...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/groups.md
tools):

```json5
{
  agents: {
    defaults: {
      sandbox: {
        mode: "non-main", // groups/channels are non-main -> sandboxed
        scope: "session", // strongest isolation (one container per group/channel)
        workspaceAccess: "none",
      },
    },
  },
  tools: {
    sandbox: {
      tools: {
        // If allow is non-empty, everything else is blocked (deny still wi...
```

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/groups.md
that expose quote metadata. Current built-in cases include Telegram, WhatsApp, Slack, Discord, Microsoft Teams, and ZaloUser.

```json5
{
  channels: {
    whatsapp: {
      groups: {
        "*": { requireMention: true },
        "123@g.us": { requireMention: false },
      },
    },
    telegram: {
      groups: {
        "*": { requireMention: true },
        "123456789": { req...
```

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/groups.md
a standalone message. Other surfaces currently ignore `/activation`.

## Context fields

Group inbound payloads set:

- `ChatType=group`
- `GroupSubject` (if known)
- `GroupMembers` (if known)
- `WasMentioned` (mention gating result)
- Telegram forum topics also include `MessageThreadId` and `IsForum`.

Channel-specific no...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/groups.md
# Zalo Personal (unofficial)

Status: experimental. This integration automates a **personal Zalo account** via native `zca-js` inside OpenClaw.

> **Warning:** This is an unofficial integration and may result in account suspension/ban. Use at your own risk.

## Bundled plugin

Zalo Personal ships as a bundled plugi...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/zalouser.md
### Group mention gating

- `channels.zalouser.groups..requireMention` controls whether group replies require a mention.
- Resolution order: exact group id/name -> normalized group slug -> `*` -> default (`true`).
- This applies both to allowlisted groups and open group mode.
- Quoting a bot message counts a...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/zalouser.md
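The resolution order above can be illustrated with a config fragment; the group name is a hypothetical placeholder, and only the `channels.zalouser.groups` key path and `requireMention` come from this doc:

```json5
{
  channels: {
    zalouser: {
      groups: {
        // Exact name match wins over the "*" fallback.
        "Family Group": { requireMention: false },
        "*": { requireMention: true }, // default for all other groups
      },
    },
  },
}
```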
# Telegram (Bot API)

Status: production-ready for bot DMs + groups via grammY. Long polling is the default mode; webhook mode is optional. Default DM policy for Telegram is pairing.

Related: cross-channel diagnostics and repair playbooks; full channel config patterns and examples.

## Quick setup

Open Telegram and chat with **...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md
(default)
- `disabled`

`groupAllowFrom` is used for group sender filtering. If not set, Telegram falls back to `allowFrom`. `groupAllowFrom` entries should be numeric Telegram user IDs (`telegram:` / `tg:` prefixes are normalized). Do not put Telegram group or supergroup chat IDs in `groupAllowFrom`. Negative chat IDs ...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md
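The `groupAllowFrom` rules above can be sketched as config; the IDs are placeholders, and only the key name and prefix normalization come from this doc:

```json5
{
  channels: {
    telegram: {
      // Numeric Telegram user IDs; "telegram:" / "tg:" prefixes are normalized.
      // Do not put (negative) group/supergroup chat IDs here.
      groupAllowFrom: [123456789, "tg:987654321"],
    },
  },
}
```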
example media payloads), OpenClaw falls back to normal final delivery and then cleans up the preview message.

Preview streaming is separate from block streaming. When block streaming is explicitly enabled for Telegram, OpenClaw skips the preview stream to avoid double-streaming. If native draft transport is unavailable...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md
include:

- `sendMessage` (`to`, `content`, optional `mediaUrl`, `replyToMessageId`, `messageThreadId`)
- `react` (`chatId`, `messageId`, `emoji`)
- `deleteMessage` (`chatId`, `messageId`)
- `editMessage` (`chatId`, `messageId`, `content`)
- `createForumTopic` (`chatId`, `name`, optional `iconColor`, `iconCustomEmojiId`)...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md
notes do not support captions; provided message text is sent separately.

### Stickers

Inbound sticker handling:

- static WEBP: downloaded and processed (placeholder ``)
- animated TGS: skipped
- video WEBM: skipped

Sticker context fields:

- `Sticker.emoji`
- `Sticker.setName`
- `Sticker.fileId`
- `Sticker.fileUniqueId`...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md
```bash
  --target -1001234567890:topic:42 \
  --poll-question "Pick a time" --poll-option "10am" --poll-option "2pm" \
  --poll-duration-seconds 300 --poll-public
```

Telegram-only poll flags:

- `--poll-duration-seconds` (5-600)
- `--poll-anonymous`
- `--poll-public`
- `--thread-id` for forum topics (or use a `:topic:` target)

Tele...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md
other Telegram config keys).

```json5
{
  channels: {
    telegram: {
      errorPolicy: "reply",
      errorCooldownMs: 120000,
      groups: {
        "-1001234567890": {
          errorPolicy: "silent", // suppress errors in this group
        },
      },
    },
  },
}
```

## Troubleshooting

- If `requireMention=false`, Telegram privacy mode must allow full visibility.
- BotFa...

Source: https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md
group sender allowlist (numeric Telegram user IDs). `openclaw doctor --fix` can resolve legacy `@username` entries to IDs. Non-numeric entries are ignored at auth time. Group auth does not use DM pairing-store fallback (`2026.2.25+`). - Multi-account precedence: - When two or more account IDs are configured, set `chann... | https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md | main | opebclaw | [
... | 0.022253 |
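The numeric-ID allowlist rule in the row above can be sketched as a small filter. The helper names are hypothetical, and the treatment of an empty effective allowlist is an assumption for illustration, not OpenClaw's actual resolver:

```python
def effective_allowlist(entries):
    """Keep only numeric Telegram user IDs; non-numeric entries
    (e.g. legacy @username keys) are ignored at auth time."""
    return [e for e in entries if str(e).isdigit()]

def is_sender_allowed(sender_id, entries):
    """Hypothetical check: sender passes only if their numeric ID
    survives the filter above (empty-list behavior is assumed)."""
    allowed = {str(e) for e in effective_allowlist(entries)}
    return str(sender_id) in allowed
```

As the doc notes, `openclaw doctor --fix` can resolve legacy `@username` entries to IDs before they ever reach a filter like this.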
local webhook bind host (default `127.0.0.1`). - `channels.telegram.webhookPort`: local webhook bind port (default `8787`). - `channels.telegram.actions.reactions`: gate Telegram tool reactions. - `channels.telegram.actions.sendMessage`: gate Telegram tool message sends. - `channels.telegram.actions.deleteMessage`: gat... | https://github.com/openclaw/openclaw/blob/main//docs/channels/telegram.md | main | opebclaw | [
... | 0.211469 |
# Nextcloud Talk Status: bundled plugin (webhook bot). Direct messages, rooms, reactions, and markdown messages are supported. ## Bundled plugin Nextcloud Talk ships as a bundled plugin in current OpenClaw releases, so normal packaged builds do not need a separate install. If you are on an older build or a custom insta... | https://github.com/openclaw/openclaw/blob/main//docs/channels/nextcloud-talk.md | main | opebclaw | [
... | 0.042398 |
`channels.nextcloud-talk.blockStreaming`: disable block streaming for this channel. - `channels.nextcloud-talk.blockStreamingCoalesce`: block streaming coalesce tuning. - `channels.nextcloud-talk.mediaMaxMb`: inbound media cap (MB). ## Related - [Channels Overview](/channels) — all supported channels - [Pairing](/chann... | https://github.com/openclaw/openclaw/blob/main//docs/channels/nextcloud-talk.md | main | opebclaw | [
... | 0.068867 |
# Google Chat (Chat API) Status: ready for DMs + spaces via Google Chat API webhooks (HTTP only). ## Quick setup (beginner) 1. Create a Google Cloud project and enable the **Google Chat API**. - Go to: [Google Chat API Credentials](https://console.cloud.google.com/apis/api/chat.googleapis.com/credentials) - Enable ... | https://github.com/openclaw/openclaw/blob/main//docs/channels/googlechat.md | main | opebclaw | [
... | -0.024389 |
(port 8443):** ```bash # If bound to localhost (127.0.0.1 or 0.0.0.0): tailscale serve --bg --https 8443 http://127.0.0.1:18789 # If bound to Tailscale IP only (e.g., 100.106.161.80): tailscale serve --bg --https 8443 http://100.106.161.80:18789 ``` 3. **Expose only the webhook path publicly:** ```bash # If bound... | https://github.com/openclaw/openclaw/blob/main//docs/channels/googlechat.md | main | opebclaw | [
... | 0.000821 |
and `channels action` when `actions.reactions` is enabled. - Message actions expose `send` for text and `upload-file` for explicit attachment sends. `upload-file` accepts `media` / `filePath` / `path` plus optional `message`, `filename`, and thread targeting. - `typingIndicator` supports `none`, `message` (default), an... | https://github.com/openclaw/openclaw/blob/main//docs/channels/googlechat.md | main | opebclaw | [
... | 0.035197 |
# Mattermost Status: bundled plugin (bot token + WebSocket events). Channels, groups, and DMs are supported. Mattermost is a self-hostable team messaging platform; see the official site at [mattermost.com](https://mattermost.com) for product details and downloads. ## Bundled plugin Mattermost ships as a bundled plugin ... | https://github.com/openclaw/openclaw/blob/main//docs/channels/mattermost.md | main | opebclaw | [
... | 0.107291 |
in the main channel or start a thread under the triggering post. - `off` (default): only reply in a thread when the inbound post is already in one. - `first`: for top-level channel/group posts, start a thread under that post and route the conversation to a thread-scoped session. - `all`: same behavior as `first` for Ma... | https://github.com/openclaw/openclaw/blob/main//docs/channels/mattermost.md | main | opebclaw | [
... | 0.051776 |
receives the selection and can respond. Enable buttons by adding `inlineButtons` to the channel capabilities: ```json5 { channels: { mattermost: { capabilities: ["inlineButtons"], }, }, } ``` Use `message action=send` with a `buttons` parameter. Buttons are a 2D array (rows of buttons): ``` message action=send channel=... | https://github.com/openclaw/openclaw/blob/main//docs/channels/mattermost.md | main | opebclaw | [
... | 0.074363 |
sorted keys, which produces compact output). 4. Sign: `HMAC-SHA256(key=secret, data=serializedContext)` 5. Add the resulting hex digest as `_token` in the context. Python example: ```python import hmac, hashlib, json secret = hmac.new( b"openclaw-mattermost-interactions", bot_token.encode(), hashlib.sha256 ).hexdigest(... | https://github.com/openclaw/openclaw/blob/main//docs/channels/mattermost.md | main | opebclaw | [
... | -0.047813 |
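The Mattermost signing steps described in the row above (sorted-key compact serialization, then HMAC-SHA256 with a secret derived from the bot token) can be sketched end to end. `sign_context` is a hypothetical helper name, and feeding the hex-encoded secret to the second HMAC as ASCII bytes is an assumption based on the doc's Python snippet:

```python
import hashlib
import hmac
import json

def derive_secret(bot_token: str) -> str:
    # Derive the per-bot signing secret from the bot token, using the
    # constant key shown in the doc's Python example.
    return hmac.new(b"openclaw-mattermost-interactions",
                    bot_token.encode(), hashlib.sha256).hexdigest()

def sign_context(context: dict, bot_token: str) -> dict:
    # Serialize with sorted keys and compact separators, sign with
    # HMAC-SHA256, and attach the hex digest as _token.
    secret = derive_secret(bot_token)
    serialized = json.dumps(context, sort_keys=True, separators=(",", ":"))
    token = hmac.new(secret.encode(), serialized.encode(),
                     hashlib.sha256).hexdigest()
    return {**context, "_token": token}
```

Because serialization sorts keys, two contexts with the same entries in different insertion orders produce the same `_token`.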
does not match the button's `id`. Set both to the same sanitized value. - Agent doesn't know about buttons: add `capabilities: ["inlineButtons"]` to the Mattermost channel config. ## Related - [Channels Overview](/channels) — all supported channels - [Pairing](/channels/pairing) — DM authentication and pairing flow - [... | https://github.com/openclaw/openclaw/blob/main//docs/channels/mattermost.md | main | opebclaw | [
... | 0.044062 |
# IRC Use IRC when you want OpenClaw in classic channels (`#room`) and direct messages. IRC ships as an extension plugin, but it is configured in the main config under `channels.irc`. ## Quick start 1. Enable IRC config in `~/.openclaw/openclaw.json`. 2. Set at least: ```json5 { channels: { irc: { enabled: true, host: ... | https://github.com/openclaw/openclaw/blob/main//docs/channels/irc.md | main | opebclaw | [
... | 0.100668 |
"gateway", "nodes", "cron", "browser"], }, "id:eigen": { deny: ["gateway", "nodes", "cron"], }, }, }, }, }, }, } ``` Notes: - `toolsBySender` keys should use `id:` for IRC sender identity values: `id:eigen` or `id:eigen!~eigen@174.127.248.171` for stronger matching. - Legacy unprefixed keys are still accepted and match... | https://github.com/openclaw/openclaw/blob/main//docs/channels/irc.md | main | opebclaw | [
... | 0.062957 |
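The IRC `toolsBySender` key rules in the row above (an `id:` prefix, nick-only keys, or full `nick!user@host` masks for stronger matching, with legacy unprefixed keys still accepted) can be sketched as a matcher; the function name and exact precedence are hypothetical:

```python
def sender_key_matches(key: str, nick: str, full_mask: str) -> bool:
    """Match a toolsBySender key against an IRC sender.
    full_mask is the sender's nick!user@host identity."""
    # Strip the id: prefix; legacy unprefixed keys are still accepted.
    value = key[3:] if key.startswith("id:") else key
    if "!" in value:
        return value == full_mask   # stronger match: full mask
    return value == nick            # nick-only match
```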
# Discord (Bot API) Status: ready for DMs and guild channels via the official Discord gateway. Discord DMs default to pairing mode. This page also covers native command behavior, the command catalog, and the cross-channel diagnostics and repair flow. ## Quick setup You will need to create a new application with a bot, add the bot to your server, and... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [
... | 0.055101 |
`` and Server ID ``." If you prefer file-based config, set: ```json5 { channels: { discord: { enabled: true, token: { source: "env", provider: "default", id: "DISCORD_BOT_TOKEN", }, }, }, } ``` Env fallback for the default account: ```bash DISCORD_BOT_TOKEN=... ``` Plaintext `token` values are supported. SecretRef ... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [
... | 0.111315 |
parent (`channel:`) to auto-create a thread. The thread title uses the first non-empty line of your message. - Use `openclaw message thread create` to create a thread directly. Do not pass `--message-id` for forum channels. Example: send to forum parent to create a thread ```bash openclaw message send --channel discord... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [
... | 0.062372 |
`allowlist` behavior: - guild must match `channels.discord.guilds` (`id` preferred, slug accepted) - optional sender allowlists: `users` (stable IDs recommended) and `roles` (role IDs only); if either is configured, senders are allowed when they match `users` OR `roles` - direct name/tag matching is disabled by default... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [
... | 0.046362 |
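The Discord `allowlist` rule in the row above (if either `users` or `roles` is configured, a sender passes when they match `users` OR `roles`) can be sketched as a predicate; the function name and the open-by-default behavior when neither list is set are illustrative assumptions:

```python
def guild_sender_allowed(sender_id, sender_roles, users=None, roles=None):
    """Sketch of the sender allowlist: with neither list configured,
    guild-level matching alone decides; otherwise users OR roles."""
    if not users and not roles:
        return True  # no sender allowlist configured (assumption)
    if users and str(sender_id) in {str(u) for u in users}:
        return True
    if roles and any(str(r) in {str(x) for x in roles} for r in sender_roles):
        return True
    return False
```

Stable user IDs and role IDs are used throughout, matching the doc's note that direct name/tag matching is disabled by default.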
implicit reply threading. Explicit `[[reply_to_*]]` tags are still honored. `first` always attaches the implicit native reply reference to the first outbound Discord message for the turn. `batched` only attaches Discord's implicit native reply reference when the inbound turn was a debounced batch of multiple message... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [
... | 0.099312 |
acp: { agent: "codex", backend: "acpx", mode: "persistent", cwd: "/workspace/openclaw", }, }, }, ], }, bindings: [ { type: "acp", agentId: "codex", match: { channel: "discord", accountId: "default", peer: { kind: "channel", id: "222222222222222222" }, }, acp: { label: "codex-main" }, }, ], channels: { discord: { guilds... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [
... | 0.075013 |
Watching - 4: Custom (uses the activity text as the status state; emoji is optional) - 5: Competing Auto presence example (runtime health signal): ```json5 { channels: { discord: { autoPresence: { enabled: true, intervalMs: 30000, minUpdateIntervalMs: 15000, exhaustedText: "token exhausted", }, }, }, } ``` Auto presenc... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [
... | 0.204171 |
OpenClaw uses Discord components v2 for exec approvals and cross-context markers. Discord message actions can also accept `components` for custom UI (advanced; requires constructing a component payload via the discord tool), while legacy `embeds` remain available but are not recommended. - `channels.discord.ui.componen... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [
... | 0.07614 |
}, inboundWorker: { runTimeoutMs: 1800000, }, }, }, }, }, } ``` Use `eventQueue.listenerTimeout` for slow listener setup and `inboundWorker.runTimeoutMs` only if you want a separate safety valve for queued agent turns. `channels status --probe` permission checks only work for numeric channel IDs. If you use slug keys, ... | https://github.com/openclaw/openclaw/blob/main//docs/channels/discord.md | main | opebclaw | [
... | 0.038661 |
# LINE LINE connects to OpenClaw via the LINE Messaging API. The plugin runs as a webhook receiver on the gateway and uses your channel access token + channel secret for authentication. Status: bundled plugin. Direct messages, group chats, media, locations, Flex messages, template messages, and quick replies are suppor... | https://github.com/openclaw/openclaw/blob/main//docs/channels/line.md | main | opebclaw | [
... | 0.050147 |
card", contents: { /* Flex payload */ }, }, templateMessage: { type: "confirm", text: "Proceed?", confirmLabel: "Yes", confirmData: "yes", cancelLabel: "No", cancelData: "no", }, }, }, } ``` The LINE plugin also ships a `/card` command for Flex message presets: ``` /card info "Welcome" "Thanks for joining!" ``` ## ACP ... | https://github.com/openclaw/openclaw/blob/main//docs/channels/line.md | main | opebclaw | [
... | 0.152016 |
### Run a model ``` ollama run gemma3 ``` ### Launch integrations ``` ollama launch ``` Configure and launch external applications to use Ollama models. This provides an interactive way to set up and start integrations with supported apps. #### Supported integrations - **OpenCode** - Open-source coding assistant - ... | https://github.com/ollama/ollama/blob/main//docs/cli.mdx | main | ollama | [
... | 0.187907 |
## Install To install Ollama, run the following command: ```shell curl -fsSL https://ollama.com/install.sh | sh ``` ## Manual install If you are upgrading from a prior version, you should remove the old libraries with `sudo rm -rf /usr/lib/ollama` first. Download and extract the package: ```shell curl -fsSL https://oll... | https://github.com/ollama/ollama/blob/main//docs/linux.mdx | main | ollama | [
... | 0.031845 |
## Nvidia Ollama supports Nvidia GPUs with compute capability 5.0+ and driver version 531 and newer. Check your compute compatibility to see if your card is supported: [https://developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus) | Compute Capability | Family | Cards | | ------------------ | --------... | https://github.com/ollama/ollama/blob/main//docs/gpu.mdx | main | ollama | [
... | 0.013952 |
the following AMD GPUs via the ROCm library: > **NOTE:** > Additional AMD GPU support is provided by the Vulkan Library - see below. ### Linux Support Ollama requires the AMD ROCm v7 driver on Linux. You can install or upgrade using the `amdgpu-install` utility from [AMD's ROCm documentation](https://rocm.docs.amd.... | https://github.com/ollama/ollama/blob/main//docs/gpu.mdx | main | ollama | [
... | 0.080266 |
file an [issue](https://github.com/ollama/ollama/issues) for additional help. ### GPU Selection If you have multiple AMD GPUs in your system and want to limit Ollama to use a subset, you can set `ROCR_VISIBLE_DEVICES` to a comma-separated list of GPUs. You can see the list of devices with `rocminfo`. If you want to i... | https://github.com/ollama/ollama/blob/main//docs/gpu.mdx | main | ollama | [
... | 0.072618 |
[Ollama](https://ollama.com) is the easiest way to get up and running with large language models such as gpt-oss, Gemma 3, DeepSeek-R1, Qwen3 and more. Get up and running with your first model or integrate Ollama with your favorite tools Download Ollama on macOS, Windows or Linux Ollama's cloud models offer larger mode... | https://github.com/ollama/ollama/blob/main//docs/index.mdx | main | ollama | [
... | 0.1588 |
## CPU only ```shell docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama ``` ## Nvidia GPU Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installation). ### Install with Apt 1. Configure the repository ```sh... | https://github.com/ollama/ollama/blob/main//docs/docker.mdx | main | ollama | [
... | -0.014173 |
## Table of Contents - [Importing a Safetensors adapter](#Importing-a-fine-tuned-adapter-from-Safetensors-weights) - [Importing a Safetensors model](#Importing-a-model-from-Safetensors-weights) - [Importing a GGUF file](#Importing-a-GGUF-based-model-or-adapter) - [Sharing models on ollama.com](#Sharing-your-model-on-ol... | https://github.com/ollama/ollama/blob/main//docs/import.mdx | main | ollama | [
... | 0.067607 |
Ollama can quantize FP16 and FP32 based models into different quantization levels using the `-q/--quantize` flag with the `ollama create` command. First, create a Modelfile with the FP16 or FP32 based model you wish to quantize. ```dockerfile FROM /path/to/my/gemma/f16/model ``` Use `ollama create` to then create the q... | https://github.com/ollama/ollama/blob/main//docs/import.mdx | main | ollama | [
... | 0.035231 |
Context length is the maximum number of tokens that the model has access to in memory. Ollama defaults to the following context lengths based on VRAM: - < 24 GiB VRAM: 4k context - 24-48 GiB VRAM: 32k context - >= 48 GiB VRAM: 256k context Tasks which require large context like web search, agents, and coding tools shou... | https://github.com/ollama/ollama/blob/main//docs/context-length.mdx | main | ollama | [
... | 0.12269 |
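The VRAM tiers in the row above map directly to default context lengths. A minimal sketch of that mapping (the function name is hypothetical, and 4k/32k/256k are taken as 4096/32768/262144 tokens):

```python
def default_context_length(vram_gib: float) -> int:
    # Tiers from the doc: < 24 GiB -> 4k, 24-48 GiB -> 32k, >= 48 GiB -> 256k.
    if vram_gib < 24:
        return 4096
    if vram_gib < 48:
        return 32768
    return 262144
```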
Ollama provides a powerful templating engine backed by Go's built-in templating engine to construct prompts for your large language model. This feature is a valuable tool to get the most out of your models. ## Basic Template Structure A basic Go template consists of three main parts: - **Layout**: The overall struc... | https://github.com/ollama/ollama/blob/main//docs/template.mdx | main | ollama | [
... | 0.13127 |
useful for models trained to call external tools and can be a powerful tool for retrieving real-time data or performing complex tasks. #### Mistral Mistral v0.3 and Mixtral 8x22B support tool calling. ```go {{- range $index, $_ := .Messages }} {{- if eq .Role "user" }} {{- if and (le (len (slice $.Messages $index)) 2) $... | https://github.com/ollama/ollama/blob/main//docs/template.mdx | main | ollama | [
... | 0.100516 |
Ollama runs as a native Windows application, including NVIDIA and AMD Radeon GPU support. After installing Ollama for Windows, Ollama will run in the background and the `ollama` command line is available in `cmd`, `powershell` or your favorite terminal application. As usual the Ollama [API](/api) will be served on `htt... | https://github.com/ollama/ollama/blob/main//docs/windows.mdx | main | ollama | [
... | 0.053301 |
Ollama CLI and GPU library dependencies for Nvidia. Depending on your hardware, you may also need to download and extract additional packages into the same directory: - **AMD GPU**: `ollama-windows-amd64-rocm.zip` - **MLX (CUDA)**: `ollama-windows-amd64-mlx.zip` This allows for embedding Ollama in existing appl... | https://github.com/ollama/ollama/blob/main//docs/windows.mdx | main | ollama | [
... | 0.015244 |
## How can I upgrade Ollama? Ollama on macOS and Windows will automatically download updates. Click on the taskbar or menubar item and then click "Restart to update" to apply the update. Updates can also be installed by downloading the latest version [manually](https://ollama.com/download/). On Linux, re-run the instal... | https://github.com/ollama/ollama/blob/main//docs/faq.mdx | main | ollama | [
... | 0.080442 |
model pulls, only HTTPS. Setting `HTTP_PROXY` may interrupt client connections to the server. ### How do I use Ollama behind a proxy in Docker? The Ollama Docker container image can be configured to use a proxy by passing `-e HTTPS_PROXY=https://proxy.example.com` when starting the container. Alternatively, the Docke... | https://github.com/ollama/ollama/blob/main//docs/faq.mdx | main | ollama | [
... | 0.020228 |
- Linux: `/usr/share/ollama/.ollama/models` - Windows: `C:\Users\%username%\.ollama\models` ### How do I set them to a different location? If a different directory needs to be used, set the environment variable `OLLAMA_MODELS` to the chosen directory. On Linux using the standard installer, the `ollama` user needs read... | https://github.com/ollama/ollama/blob/main//docs/faq.mdx | main | ollama | [
... | 0.057669 |
into memory by setting the `OLLAMA_KEEP_ALIVE` environment variable when starting the Ollama server. The `OLLAMA_KEEP_ALIVE` variable uses the same parameter types as the `keep_alive` parameter mentioned above. Refer to the section explaining [how to configure the Ollama server](#how-do-i-configure-ollama-se... | https://github.com/ollama/ollama/blob/main//docs/faq.mdx | main | ollama | [
... | 0.156725 |
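The `keep_alive` parameter types referenced in the row above can be sketched as a small interpreter. This is a hypothetical helper under the common conventions for this parameter (duration strings like `"10m"` or `"24h"`, plain numbers of seconds, negative values meaning "keep loaded indefinitely", `0` meaning "unload immediately"), not Ollama's actual implementation:

```python
def parse_keep_alive(value):
    """Interpret a keep_alive value as seconds.
    Returns None for "keep loaded indefinitely" (negative values);
    0 means unload immediately after the response."""
    units = {"s": 1, "m": 60, "h": 3600}
    if isinstance(value, (int, float)):
        seconds = value
    elif value and value[-1] in units:
        seconds = float(value[:-1]) * units[value[-1]]
    else:
        seconds = float(value)
    return None if seconds < 0 else int(seconds)
```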
usage as the context size grows. To enable Flash Attention, set the `OLLAMA_FLASH_ATTENTION` environment variable to `1` when starting the Ollama server. ## How can I set the quantization type for the K/V cache? The K/V context cache can be quantized to significantly reduce memory usage when Flash Attention is enable... | https://github.com/ollama/ollama/blob/main//docs/faq.mdx | main | ollama | [
... | 0.047572 |
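The memory savings from K/V cache quantization mentioned in the row above can be sketched as a rough estimator. The ratios used here (`q8_0` at roughly half and `q4_0` at roughly a quarter of the f16 cache size) are the approximate figures commonly cited for these quantization types; the helper name and exact numbers are illustrative assumptions, not measurements:

```python
def kv_cache_estimate_mb(f16_cache_mb: float, quant: str = "f16") -> float:
    """Rough memory estimate for a quantized K/V cache, relative to f16.
    Ratios are approximations, not exact figures."""
    ratios = {"f16": 1.0, "q8_0": 0.5, "q4_0": 0.25}
    return f16_cache_mb * ratios[quant]
```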
A Modelfile is the blueprint to create and share customized models using Ollama. ## Table of Contents - [Format](#format) - [Examples](#examples) - [Instructions](#instructions) - [FROM (Required)](#from-required) - [Build from existing model](#build-from-existing-model) - [Build from a Safetensors model](#build-from-a... | https://github.com/ollama/ollama/blob/main//docs/modelfile.mdx | main | ollama | [
... | 0.089384 |
| num_ctx | Sets the size of the context window used to generate the next token. (Default: 2048) | int | num_ctx 4096 | | repeat_last_n | Sets how far back the model looks to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx) | int | repeat_last_n 64 | | repeat_penalty | Sets how strongly ... | https://github.com/ollama/ollama/blob/main//docs/modelfile.mdx | main | ollama | [
-0.017187684774398804,
-0.020023126155138016,
-0.025267038494348526,
0.05982105806469917,
-0.03005521558225155,
0.05990017578005791,
0.06248698756098747,
0.015257498249411583,
0.04415355622768402,
-0.044162556529045105,
-0.008790258318185806,
-0.05629337579011917,
0.10083746165037155,
-0.0... | 0.124655 |
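The parameter table in the row above maps directly onto `PARAMETER` lines in a Modelfile. A minimal sketch under stated assumptions — the base model name is hypothetical, and the values shown are illustrative rather than recommended:

```
# Hypothetical Modelfile; the base model name is an assumption
FROM llama3.2
PARAMETER num_ctx 4096
PARAMETER repeat_last_n 64
PARAMETER repeat_penalty 1.1
```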
be specified with a `FROM` instruction. If the base model is not the same as the base model that the adapter was tuned from the behaviour will be erratic. #### Safetensor adapter ``` ADAPTER ``` Currently supported Safetensor adapters: - Llama (including Llama 2, Llama 3, and Llama 3.1) - Mistral (including Mistral 1, ... | https://github.com/ollama/ollama/blob/main//docs/modelfile.mdx | main | ollama | [… 384 floats, truncated] | 0.069349 |
Ollama is available on macOS, Windows, and Linux. [Download Ollama](https://ollama.com/download) ## Get Started Run `ollama` in your terminal to open the interactive menu: ```sh ollama ``` Navigate with `↑/↓`, press `enter` to launch, `→` to change model, and `esc` to quit. The menu provides quick access to: - \*\*Run ... | https://github.com/ollama/ollama/blob/main//docs/quickstart.mdx | main | ollama | [… 384 floats, truncated] | 0.218563 |
## Cloud Models Ollama's cloud models are a new kind of model in Ollama that can run without a powerful GPU. Instead, cloud models are automatically offloaded to Ollama's cloud service while offering the same capabilities as local models, making it possible to keep using your local tools while running larger models tha... | https://github.com/ollama/ollama/blob/main//docs/cloud.mdx | main | ollama | [… 384 floats, truncated] | 0.043686 |
## System Requirements \* MacOS Sonoma (v14) or newer \* Apple M series (CPU and GPU support) or x86 (CPU only) ## Filesystem Requirements The preferred method of installation is to mount the `ollama.dmg` and drag-and-drop the Ollama application to the system-wide `Applications` folder. Upon startup, the Ollama app wil... | https://github.com/ollama/ollama/blob/main//docs/macos.mdx | main | ollama | [… 384 floats, truncated] | 0.177024 |
Sometimes Ollama may not perform as expected. One of the best ways to figure out what happened is to take a look at the logs. Find the logs on \*\*Mac\*\* by running the command: ```shell cat ~/.ollama/logs/server.log ``` On \*\*Linux\*\* systems with systemd, the logs can be found with this command: ```shell journalct... | https://github.com/ollama/ollama/blob/main//docs/troubleshooting.mdx | main | ollama | [… 384 floats, truncated] | 0.029257 |
and add `"exec-opts": ["native.cgroupdriver=cgroupfs"]` to the docker configuration. ## NVIDIA GPU Discovery When Ollama starts up, it takes inventory of the GPUs present in the system to determine compatibility and how much VRAM is available. Sometimes this discovery can fail to find your GPUs. In general, running the... | https://github.com/ollama/ollama/blob/main//docs/troubleshooting.mdx | main | ollama | [… 384 floats, truncated] | -0.006582 |
documentation](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/). After upgrading, reboot and restart Ollama. ## Multiple AMD GPUs If you experience gibberish responses when models load across multiple AMD GPUs on Linux, see the following guide. - https://rocm.docs.amd.com/projects/radeon/en/latest/docs/i... | https://github.com/ollama/ollama/blob/main//docs/troubleshooting.mdx | main | ollama | [… 384 floats, truncated] | 0.021725 |
Ollama's web search API can be used to augment models with the latest information to reduce hallucinations and improve accuracy. Web search is provided as a REST API with deeper tool integrations in the Python and JavaScript libraries. This also enables models like OpenAI’s gpt-oss models to conduct long-running resear... | https://github.com/ollama/ollama/blob/main//docs/capabilities/web-search.mdx | main | ollama | [… 384 floats, truncated] | 0.091264 |
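The web-search row above describes a REST API with Python and JavaScript integrations. The sketch below only constructs a plausible request body locally — the `query` and `max_results` field names (and any endpoint) are assumptions, not confirmed by the truncated snippet:

```python
import json

# Build a hypothetical web-search request body; the field names
# used here are assumptions for illustration only.
def build_search_request(query: str, max_results: int = 5) -> str:
    payload = {"query": query, "max_results": max_results}
    return json.dumps(payload)

body = build_search_request("what is ollama?")
print(body)
```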
with open models\*\*\n\n[Download](https://ollama.com/download) [Explore models](https://ollama.com/models)\n\nAvailable for macOS, Windows, and Linux', links=['https://ollama.com/', 'https://ollama.com/models', 'https://github.com/ollama/ollama'] ) ``` #### JavaScript SDK ```tsx import { Ollama } from "ollama"; const ... | https://github.com/ollama/ollama/blob/main//docs/capabilities/web-search.mdx | main | ollama | [… 384 floats, truncated] | 0.07339 |
> Add the following configuration: ```json { "mcpServers": { "web\_search\_and\_fetch": { "type": "stdio", "command": "uv", "args": ["run", "path/to/web-search-mcp.py"], "env": { "OLLAMA\_API\_KEY": "your\_api\_key\_here" } } } } ```  ### Codex Ollama works well with Ope... | https://github.com/ollama/ollama/blob/main//docs/capabilities/web-search.mdx | main | ollama | [… 384 floats, truncated] | 0.018315 |
Structured outputs let you enforce a JSON schema on model responses so you can reliably extract structured data, describe images, or keep every reply consistent. ## Generating structured JSON ```shell curl -X POST http://localhost:11434/api/chat -H "Content-Type: application/json" -d '{ "model": "gpt-oss", "messages": ... | https://github.com/ollama/ollama/blob/main//docs/capabilities/structured-outputs.mdx | main | ollama | [… 384 floats, truncated] | -0.000351 |
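The structured-outputs row above shows a truncated `curl` call to `/api/chat`. The same request body can be sketched in Python — the schema fields are illustrative, and passing a JSON schema via the `format` key follows the snippet's description of enforcing a schema on responses:

```python
import json

# Illustrative JSON schema to constrain the model's reply
schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

# Request body mirroring the curl example in the row above
payload = {
    "model": "gpt-oss",
    "messages": [{"role": "user", "content": "Tell me about Canada."}],
    "format": schema,  # assumed: a JSON schema here enforces structured output
    "stream": False,
}

print(json.dumps(payload, indent=2))
```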