Commit History
Refactor file handling for multimodal chat messages 1b26311
MCP (#1981) e67ab0e
Handle empty system messages in preprocessing and OpenAI endpoint 4a83c49
Plus UI dropdown (#1971) 97c1c85
Add optional chaining for choices array access 2bc6604
Url attachments (#1965) 17d4d70
Reduce max image upload size to 1MB f96e789
Revert "Url attachments (#1950)" 0167104
Url attachments (#1950) a3c8749
Set custom User-Agent for HuggingChat requests f78624a
Reduce max image dimensions in OAI parameters schema 9e09cb3
Allow <think> tags in content for DeepSeek R1 d0f5835
Add provider info to router metadata in chat messages 0a53350
Abort (#1924) 4e8a811
Automatic login (#1900) 248183e
Store oauth token in DB + use it when doing API calls (#1885) c4f6eb3
fix CI (#1884) 2883086
Refactor token parameters to standardize naming across models and generation settings 7bdb5ed
Omni multimodality (#1880) 1d8f41b
HuggingChat 2026 (#1875) 4331e77
feat: add custom headers support to endpoint inference client cc405b1
fix: only import node-llama-cpp if needed and skip for huggingchat image d8e426c
fix: make sure document parser is disabled if not required dcc2568
New `InferenceClient` endpoint type (#1813) e08e6dd
fix: build local llama (#1805) 4ad3b5d
feat: allow storing env variable in DB (#1802) 31daf3d
Fix system message handling to preserve user-configured system prompts (#1764) 6370857
fix: lint cacbed6
fix: docker image llama.cpp error catching 9fb713b
Add `toolId` to ToolCall and use it in tool results (#1787) 6f95791
fix: build llama.cpp locally in dockerfile (#1788) f75b6cb
feat: add a local endpoint type for inference directly from chat-ui (#1778) 4e9a7a9
fix(preprocessMessages): clarify web search context in final message content 42aaa16
fix(endpoints): fix for tool calling on hf inference with openai endpoint type (#1754) f84082b
feat(openai): added support for non-streaming o1 (e.g. Azure) models (#1687) 38dffdd
chores(svelte): migration to svelte 5 (#1685) 21b8785
fix: bedrock prepareMessages parameters (#1686) 8247328
Anthropic Tool Support (#1594) 1ac3a3c
Add support for Amazon Nova (#1629) d965f78
fix: disable caching on OpenAI-compatible endpoint type aca14a0
fix: single step in cohere endpoint type (#1632) 6d0c42a
fix: make single step forcing optional on cohere endpoint type 0d9f3a5
feat(openai): added support for o1 reasoning models (#1618) 98c38e0
Anthropic PDF Beta Support (#1571) 63c93cf
fix: remove accessToken min length for llama.cpp endpoint ca90fbf
Fix/oai parameters 1552 (#1557) 52dfa8c