| content (string, 3–20.5k chars) | url (string) | branch (string, 4 classes) | source (string, 42 classes) | embeddings (list, length 384) | score (float64, −0.21 to 0.65) |
|---|---|---|---|---|---|
| Streaming allows you to render text as it is produced by the model. Streaming is enabled by default through the REST API, but disabled by default in the SDKs. To enable streaming in the SDKs, set the `stream` parameter to `True`. ## Key streaming concepts 1. Chatting: Stream partial assistant messages. Each chunk inclu... | https://github.com/ollama/ollama/blob/main//docs/capabilities/streaming.mdx | main | ollama | [-0.03611624240875244, -0.012033410370349884, -0.010673731565475464, ...] | 0.08423 |
| Embeddings turn text into numeric vectors you can store in a vector database, search with cosine similarity, or use in RAG pipelines. The vector length depends on the model (typically 384–1024 dimensions). ## Recommended models - [embeddinggemma](https://ollama.com/library/embeddinggemma) - [qwen3-embedding](https://ol... | https://github.com/ollama/ollama/blob/main//docs/capabilities/embeddings.mdx | main | ollama | [-0.02541196532547474, 0.005719808395951986, -0.03444680571556091, ...] | 0.041888 |
| Vision models accept images alongside text so the model can describe, classify, and answer questions about what it sees. ## Quick start ```shell ollama run gemma3 ./image.png whats in this image? ``` ## Usage with Ollama's API Provide an `images` array. SDKs accept file paths, URLs or raw bytes while the REST API expec... | https://github.com/ollama/ollama/blob/main//docs/capabilities/vision.mdx | main | ollama | [-0.009291601367294788, 0.04830854758620262, 0.018439846113324165, ...] | 0.116425 |
| Ollama supports tool calling (also known as function calling) which allows a model to invoke tools and incorporate their results into its replies. ## Calling a single tool Invoke a single tool and include its response in a follow-up request. Also known as "single-shot" tool calling. ```shell curl -s http://localhost:11... | https://github.com/ollama/ollama/blob/main//docs/capabilities/tool-calling.mdx | main | ollama | [-0.06712688505649567, 0.06637024879455566, -0.03370159864425659, ...] | 0.113585 |
| "stream": false, "tools": [ { "type": "function", "function": { "name": "get\_temperature", "description": "Get the current temperature for a city", "parameters": { "type": "object", "required": ["city"], "properties": { "city": {"type": "string", "description": "The name of the city"} } } } }, { "type": "function", "f... | https://github.com/ollama/ollama/blob/main//docs/capabilities/tool-calling.mdx | main | ollama | [-0.03177408128976822, 0.10090915113687515, 0.005042571574449539, ...] | 0.017405 |
| ['city'], properties: { city: { type: 'string', description: 'The name of the city' }, }, }, }, } ] const messages = [{ role: 'user', content: 'What are the current weather conditions and temperature in New York and London?' }] const response = await ollama.chat({ model: 'qwen3', messages, tools, think: true }) // add ... | https://github.com/ollama/ollama/blob/main//docs/capabilities/tool-calling.mdx | main | ollama | [-0.010962681844830513, 0.05859490856528282, 0.02152552269399166, ...] | 0.062033 |
| of toolCalls) { const fn = availableFunctions[call.function.name as ToolName] if (!fn) { continue } const args = call.function.arguments as { a: number; b: number } console.log(`Calling ${call.function.name} with arguments`, args) const result = fn(args.a, args.b) console.log(`Result: ${result}`) messages.push({ role: ... | https://github.com/ollama/ollama/blob/main//docs/capabilities/tool-calling.mdx | main | ollama | [-0.05039163678884506, 0.02140321023762226, -0.03468780964612961, ...] | 0.091705 |
| temperature for a city Args: city: The name of the city Returns: The current temperature for the city """ temperatures = { 'New York': '22°C', 'London': '15°C', } return temperatures.get(city, 'Unknown') available\_functions = { 'get\_temperature': get\_temperature, } # directly pass the function as part of the tools l... | https://github.com/ollama/ollama/blob/main//docs/capabilities/tool-calling.mdx | main | ollama | [-0.012166975066065788, 0.03021438792347908, 0.04998432472348213, ...] | 0.025221 |
| Thinking-capable models emit a `thinking` field that separates their reasoning trace from the final answer. Use this capability to audit model steps, animate the model \*thinking\* in a UI, or hide the trace entirely when you only need the final response. ## Supported models - [Qwen 3](https://ollama.com/library/qwen3)... | https://github.com/ollama/ollama/blob/main//docs/capabilities/thinking.mdx | main | ollama | [-0.07508975267410278, -0.0656646266579628, 0.022271549329161644, ...] | 0.092024 |
| VS Code includes built-in AI chat through GitHub Copilot Chat. Ollama models can be used directly in the Copilot Chat model picker.  ## Prerequisites - Ollama v0.18.3+ - [VS Code 1.113+](https://code.visualstudio.com/download) - [GitHub Copilot Chat extension 0.41.0+](https://m... | https://github.com/ollama/ollama/blob/main//docs/integrations/vscode.mdx | main | ollama | [-0.09001448005437851, -0.08742325007915497, -0.012175098992884159, ...] | 0.133746 |
| Pi is a minimal and extensible coding agent. ## Install Install [Pi](https://github.com/badlogic/pi-mono): ```bash npm install -g @mariozechner/pi-coding-agent ``` ## Usage with Ollama ### Quick setup ```bash ollama launch pi ``` This installs Pi, configures Ollama as a provider including web tools, and drops you into ... | https://github.com/ollama/ollama/blob/main//docs/integrations/pi.mdx | main | ollama | [-0.05105339363217354, -0.022410811856389046, -0.05749485641717911, ...] | 0.153299 |
| ## Install Install [Cline](https://docs.cline.bot/getting-started/installing-cline) in your IDE. ## Usage with Ollama 1. Open Cline settings > `API Configuration` and set `API Provider` to `Ollama` 2. Select a model under `Model` or type one (e.g. `qwen3`) 3. Update the context window to at least 32K tokens under `Cont... | https://github.com/ollama/ollama/blob/main//docs/integrations/cline.mdx | main | ollama | [-0.0784517228603363, -0.048594363033771515, -0.05388801544904709, ...] | 0.079208 |
| ## Install Install [Zed](https://zed.dev/download). ## Usage with Ollama 1. In Zed, click the \*\*star icon\*\* in the bottom-right corner, then select \*\*Configure\*\*. 2. Under \*\*LLM Providers\*\*, choose \*\*Ollama\*\* 3. Confirm the \*\*Host URL\*\* is `http://localhost:11434`, then click \*\*Connect\*\* 4. Once ... | https://github.com/ollama/ollama/blob/main//docs/integrations/zed.mdx | main | ollama | [-0.05509162321686745, 0.024555791169404984, -0.08123525232076645, ...] | 0.006202 |
| NemoClaw is NVIDIA's open source security stack for [OpenClaw](/integrations/openclaw). It wraps OpenClaw with the NVIDIA OpenShell runtime to provide kernel-level sandboxing, network policy controls, and audit trails for AI agents. ## Quick start Pull a model: ```bash ollama pull nemotron-3-nano:30b ``` Run the instal... | https://github.com/ollama/ollama/blob/main//docs/integrations/nemoclaw.mdx | main | ollama | [-0.08284580707550049, 0.002802507486194372, -0.08570531755685806, ...] | 0.251926 |
| Ollama integrates with a wide range of tools. ## Coding Agents Coding assistants that can read, modify, and execute code in your projects. - [Claude Code](/integrations/claude-code) - [Codex](/integrations/codex) - [OpenCode](/integrations/opencode) - [Droid](/integrations/droid) - [Goose](/integrations/goose) - [Pi](/... | https://github.com/ollama/ollama/blob/main//docs/integrations/index.mdx | main | ollama | [-0.07957150042057037, -0.05326253920793533, -0.007365697994828224, ...] | 0.215129 |
| ## Install Install [XCode](https://developer.apple.com/xcode/) ## Usage with Ollama Ensure Apple Intelligence is set up and the latest XCode version is v26.0 1. Click \*\*XCode\*\* in top left corner > \*\*Settings\*\* 2. Select \*\*Locally Hosted\*\*, enter port \*\*11434\*\* and click \*\*Add\*\* 3. Select the \*\*star ... | https://github.com/ollama/ollama/blob/main//docs/integrations/xcode.mdx | main | ollama | [0.009606242179870605, -0.05962353199720383, -0.03972768038511276, ...] | 0.084251 |
| ## Overview [Onyx](http://onyx.app/) is a self-hostable Chat UI that integrates with all Ollama models. Features include: - Creating custom Agents - Web search - Deep Research - RAG over uploaded documents and connected apps - Connectors to applications like Google Drive, Email, Slack, etc. - MCP and OpenAPI Actions su... | https://github.com/ollama/ollama/blob/main//docs/integrations/onyx.mdx | main | ollama | [-0.0641019195318222, -0.02595948986709118, -0.04358317703008652, ...] | 0.09382 |
| ## Install Install [n8n](https://docs.n8n.io/choose-n8n/). ## Using Ollama Locally 1. In the top right corner, click the dropdown and select \*\*Create Credential\*\* 2. Under \*\*Add new credential\*\* select \*\*Ollama\*\* 3. Confirm Base URL is set to `http://localhost:11434` if running locally or `http://host.docker.... | https://github.com/ollama/ollama/blob/main//docs/integrations/n8n.mdx | main | ollama | [-0.02965305559337139, 0.04242273047566414, 0.00785280205309391, ...] | 0.013948 |
| OpenClaw is a personal AI assistant that runs on your own devices. It bridges messaging services (WhatsApp, Telegram, Slack, Discord, iMessage, and more) to AI coding agents through a centralized gateway. ## Quick start ```bash ollama launch openclaw ``` Ollama handles everything automatically: 1. \*\*Install\*\* — If ... | https://github.com/ollama/ollama/blob/main//docs/integrations/openclaw.mdx | main | ollama | [-0.09295070171356201, -0.0397503562271595, -0.0843336433172226, ...] | 0.19657 |
| ## Install Install the [Codex CLI](https://developers.openai.com/codex/cli/): ``` npm install -g @openai/codex ``` ## Usage with Ollama Codex requires a larger context window. It is recommended to use a context window of at least 64k tokens. ### Quick setup ``` ollama launch codex ``` To configure without launching: ``... | https://github.com/ollama/ollama/blob/main//docs/integrations/codex.mdx | main | ollama | [-0.04501822218298912, -0.010666156187653542, -0.08227290213108063, ...] | 0.057878 |
| ## Install Install [Roo Code](https://marketplace.visualstudio.com/items?itemName=RooVeterinaryInc.roo-cline) from the VS Code Marketplace. ## Usage with Ollama 1. Open Roo Code in VS Code and click the \*\*gear icon\*\* on the top right corner of the Roo Code window to open \*\*Provider Settings\*\* 2. Set `API Provid... | https://github.com/ollama/ollama/blob/main//docs/integrations/roo-code.mdx | main | ollama | [-0.035075630992650986, -0.06047536060214043, -0.11131862550973892, ...] | 0.041292 |
| Claude Code is Anthropic's agentic coding tool that can read, modify, and execute code in your working directory. Open models can be used with Claude Code through Ollama's Anthropic-compatible API, enabling you to use models such as `qwen3.5`, `glm-5:cloud`, `kimi-k2.5:cloud`. : ```bash curl -fsSL https://app.factory.ai/cli \| sh ``` Droid requires a larger context window. It is recommended to use a context window of at least 64k tokens. See [Context length](/context-length) for more information. ## Usage with Ollama ### Quick setup ```ba... | https://github.com/ollama/ollama/blob/main//docs/integrations/droid.mdx | main | ollama | [-0.045875199139118195, -0.005744450725615025, 0.013216057792305946, ...] | 0.041617 |
| OpenCode is an open-source AI coding assistant that runs in your terminal. ## Install Install the [OpenCode CLI](https://opencode.ai): ```bash curl -fsSL https://opencode.ai/install \| bash ``` OpenCode requires a larger context window. It is recommended to use a context window of at least 64k tokens. See [Context lengt... | https://github.com/ollama/ollama/blob/main//docs/integrations/opencode.mdx | main | ollama | [0.01717148907482624, -0.010642733424901962, -0.10553954541683197, ...] | 0.220986 |
| Hermes Agent is a self-improving AI agent built by Nous Research. It features automatic skill creation, cross-session memory, and connects messaging platforms (Telegram, Discord, Slack, WhatsApp, Signal, Email) to models through a unified gateway. ## Quick start ### Pull a model Before running the setup wizard, make su... | https://github.com/ollama/ollama/blob/main//docs/integrations/hermes.mdx | main | ollama | [-0.05085118114948273, -0.028728490695357323, -0.04907550290226936, ...] | 0.196809 |
| ## Install Install [marimo](https://marimo.io). You can use `pip` or `uv` for this. You can also use `uv` to create a sandboxed environment for marimo by running: ``` uvx marimo edit --sandbox notebook.py ``` ## Usage with Ollama 1. In marimo, go to the user settings and go to the AI tab. From here you can find and con... | https://github.com/ollama/ollama/blob/main//docs/integrations/marimo.mdx | main | ollama | [-0.035762153565883636, -0.013870704919099808, -0.04565691947937012, ...] | 0.207051 |
| This example uses \*\*IntelliJ\*\*; the same steps apply to other JetBrains IDEs (e.g., PyCharm). ## Install Install [IntelliJ](https://www.jetbrains.com/idea/). ## Usage with Ollama To use \*\*Ollama\*\*, you will need a [JetBrains AI Subscription](https://www.jetbrains.com/ai-ides/buy/?section=personal&billing=yearly). 1... | https://github.com/ollama/ollama/blob/main//docs/integrations/jetbrains.mdx | main | ollama | [-0.08624786883592606, -0.07114212214946747, -0.021550962701439857, ...] | 0.209175 |
| ## Goose Desktop Install [Goose](https://block.github.io/goose/docs/getting-started/installation/) Desktop. ### Usage with Ollama 1. In Goose, open \*\*Settings\*\* → \*\*Configure Provider\*\*. 2. Find \*\*Ollama\*\*, click \*\*Configure\*\* 3. Confirm \*\*API Host\*\* is `http://localhost:11434` and click Submit ### C... | https://github.com/ollama/ollama/blob/main//docs/integrations/goose.mdx | main | ollama | [-0.0925910696387291, -0.0341634675860405, -0.07219167798757553, ...] | -0.025897 |
| Certain API endpoints stream responses by default, such as `/api/generate`. These responses are provided in the newline-delimited JSON format (i.e. the `application/x-ndjson` content type). For example: ```json {"model":"gemma3","created\_at":"2025-10-26T17:15:24.097767Z","response":"That","done":false} {"model":"gemma... | https://github.com/ollama/ollama/blob/main//docs/api/streaming.mdx | main | ollama | [-0.07212688773870468, 0.008659245446324348, 0.017529070377349854, ...] | 0.099386 |
| Ollama provides compatibility with the [Anthropic Messages API](https://docs.anthropic.com/en/api/messages) to help connect existing applications to Ollama, including tools like Claude Code. ## Usage ### Environment variables To use Ollama with tools that expect the Anthropic API (like Claude Code), set these environme... | https://github.com/ollama/ollama/blob/main//docs/api/anthropic-compatibility.mdx | main | ollama | [-0.046388473361730576, 0.04492592439055443, -0.045111384242773056, ...] | 0.048947 |
| backend. ### Recommended models For coding use cases, models like `glm-4.7`, `minimax-m2.1`, and `qwen3-coder` are recommended. Download a model before use: ```shell ollama pull qwen3-coder ``` > Note: Qwen 3 coder is a 30B parameter model requiring at least 24GB of VRAM to run smoothly. More is required for longer con... | https://github.com/ollama/ollama/blob/main//docs/api/anthropic-compatibility.mdx | main | ollama | [0.00785920675843954, -0.03513287752866745, -0.08845531940460205, ...] | -0.009041 |
| support \| `document` content blocks with PDF files \| \| Server-sent errors \| `error` events during streaming (errors return HTTP status) \| ### Partial support \| Feature \| Status \| \|---------\|--------\| \| Image content \| Base64 images supported; URL images not supported \| \| Extended thinking \| Basic support; `budget\_toke... | https://github.com/ollama/ollama/blob/main//docs/api/anthropic-compatibility.mdx | main | ollama | [-0.03338278830051422, 0.0027462998405098915, -0.059252671897411346, ...] | 0.107246 |
| No authentication is required when accessing Ollama's API locally via `http://localhost:11434`. Authentication is required for the following: \* Running cloud models via ollama.com \* Publishing models \* Downloading private models Ollama supports two authentication methods: \* \*\*Signing in\*\*: sign in from your loc... | https://github.com/ollama/ollama/blob/main//docs/api/authentication.mdx | main | ollama | [-0.0696077048778534, 0.004247769713401794, -0.056860484182834625, ...] | 0.003807 |
| ## Status codes Endpoints return appropriate HTTP status codes based on the success or failure of the request in the HTTP status line (e.g. `HTTP/1.1 200 OK` or `HTTP/1.1 400 Bad Request`). Common status codes are: - `200`: Success - `400`: Bad Request (missing parameters, invalid JSON, etc.) - `404`: Not Found (model ... | https://github.com/ollama/ollama/blob/main//docs/api/errors.mdx | main | ollama | [-0.095962293446064, 0.009458105079829693, 0.0030042240396142006, ...] | 0.082424 |
| Ollama's API allows you to run and interact with models programmatically. ## Get started If you're just getting started, follow the [quickstart](/quickstart) documentation to get up and running with Ollama's API. ## Base URL After installation, Ollama's API is served by default at: ``` http://localhost:11434/api ``` For... | https://github.com/ollama/ollama/blob/main//docs/api/introduction.mdx | main | ollama | [-0.11080898344516754, -0.037587832659482956, -0.03975200280547142, ...] | 0.091937 |
| Ollama's API responses include metrics that can be used for measuring performance and model usage: \* `total\_duration`: How long the response took to generate \* `load\_duration`: How long the model took to load \* `prompt\_eval\_count`: How many input tokens were processed \* `prompt\_eval\_duration`: How long it too... | https://github.com/ollama/ollama/blob/main//docs/api/usage.mdx | main | ollama | [-0.0674109011888504, 0.03649909049272537, -0.05976882204413414, ...] | 0.160216 |
| Ollama provides compatibility with parts of the [OpenAI API](https://platform.openai.com/docs/api-reference) to help connect existing applications to Ollama. ## Usage ### Simple `/v1/chat/completions` example ```python basic.py from openai import OpenAI client = OpenAI( base\_url='http://localhost:11434/v1/', api\_key=... | https://github.com/ollama/ollama/blob/main//docs/api/openai-compatibility.mdx | main | ollama | [-0.06927517801523209, -0.01605592481791973, -0.0033810571767389774, ...] | 0.071621 |
| `seed` - [x] `stop` - [x] `stream` - [x] `stream\_options` - [x] `include\_usage` - [x] `temperature` - [x] `top\_p` - [x] `max\_tokens` - [x] `suffix` - [ ] `best\_of` - [ ] `echo` - [ ] `logit\_bias` - [ ] `user` - [ ] `n` #### Notes - `prompt` currently only accepts a string ### `/v1/models` #### Notes - `created` c... | https://github.com/ollama/ollama/blob/main//docs/api/openai-compatibility.mdx | main | ollama | [-0.06779742240905762, 0.01463936548680067, -0.037886109203100204, ...] | 0.09328 |
| context size. Call the API with the updated model name: ```shell curl http://localhost:11434/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "mymodel", "messages": [ { "role": "user", "content": "Hello!" } ] }' ``` | https://github.com/ollama/ollama/blob/main//docs/api/openai-compatibility.mdx | main | ollama | [0.02354089543223381, 0.005658979061990976, 0.051700033247470856, ...] | 0.074708 |