---
title: Thinking Model Client
emoji: 🤖
colorFrom: blue
colorTo: blue
sdk: docker
sdk_version: 1.0.0
app_file: Dockerfile
pinned: false
---
# Thinking Model Client 🧠🤖
[MIT License](https://opensource.org/licenses/MIT)
[Node.js](https://nodejs.org/)
[Docker](https://www.docker.com/)
A modern React-based chat application that provides a unique interface for interacting with AI models. The application not only displays model responses but also visualizes the thinking process behind each response, giving users insight into how the AI arrives at its conclusions.
## Table of Contents
- [Features](#features)
- [Getting Started](#getting-started)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Configuration](#configuration)
## Features ✨
- 🧠 **Thinking Process Visualization**: See the step-by-step reasoning behind each AI response with interactive visualizations
- 🔌 **Flexible API Integration**: Easily connect to different AI models through configurable API endpoints
- 💾 **Conversation Persistence**: All chats are automatically saved in local storage for continuity
- 🐳 **Docker Deployment**: Ready for containerized deployment with included Docker configuration
- ⚙️ **Customizable Settings**: Adjust API parameters and model configurations through an intuitive settings panel
- 💬 **Real-time Chat**: Modern interface with smooth animations and multiple conversation tabs
- 🤖 **Multiple Models**: Support for various AI model integrations through a unified interface
- 🛠️ **Modern Stack**: Built with React and Vite for optimal performance and development experience
- 🧪 **Quality Assured**: Comprehensive unit tests ensure reliable functionality
- 🔒 **Local Data Storage**: All data is stored locally for enhanced privacy and security
- ⚡ **xsai Integration**: Powered by xsai (extra-small AI SDK) for efficient and lightweight AI model connections
- 🧩 **Reasoning Extraction**: Automatic extraction and visualization of AI reasoning processes using xsai utilities
## Getting Started
### Prerequisites
- Node.js (v14 or higher)
- npm or yarn
### Installation
1. Clone the repository:
```bash
git clone https://github.com/tao12345666333/thinking-model-client.git
cd thinking-model-client
```
2. Install dependencies:
```bash
npm install
```
3. Start the development server:
```bash
npm start
```
This runs the frontend development server and the backend proxy server concurrently.
4. Open your browser and navigate to `http://localhost:5173` to use the application.
## Configuration
The application can be configured through the settings panel, which supports multiple profiles:
### Chat Profiles
Each chat profile includes:
- **Profile Name**: Custom name for the profile
- **API Endpoint**: The endpoint for the AI model
  - Ends with `/` → `chat/completions` will be appended
  - Ends with `#` → the `#` will be removed and the endpoint used as-is
  - Other cases → `/v1/chat/completions` will be appended
- **API Key**: Your authentication key for the API
- **Model Name**: The model to use (e.g., DeepSeek-R1)
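The endpoint rules above can be sketched as a small helper. This is an illustrative function (the name `normalizeEndpoint` is an assumption, not the app's actual code):

```javascript
// Sketch of the endpoint normalization rules described above.
// Hypothetical helper for illustration only.
function normalizeEndpoint(endpoint) {
  if (endpoint.endsWith('/')) {
    // Trailing slash: append chat/completions directly.
    return endpoint + 'chat/completions';
  }
  if (endpoint.endsWith('#')) {
    // Trailing '#': strip the marker and use the endpoint as-is.
    return endpoint.slice(0, -1);
  }
  // Default: append the conventional OpenAI-style path.
  return endpoint + '/v1/chat/completions';
}

// Example:
// normalizeEndpoint('https://api.example.com/')
//   -> 'https://api.example.com/chat/completions'
```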
### Summarization Profile
A separate profile for conversation summarization:
- **API Endpoint**: Endpoint for the summarization service
- **API Key**: Authentication key for summarization
- **Model Name**: The model to use for summarization
All settings are stored locally for privacy and security. You can manage multiple chat profiles and switch between them as needed.
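Local profile storage can be sketched as a JSON round trip through a Web Storage-like object. The key name `chat-profiles` and the helper names below are assumptions for illustration, not the app's actual code; in the browser the `storage` argument would be `window.localStorage`:

```javascript
// Hedged sketch: persisting chat profiles to a Web Storage-like object.
// Key name 'chat-profiles' is hypothetical.
function saveProfiles(storage, profiles) {
  storage.setItem('chat-profiles', JSON.stringify(profiles));
}

function loadProfiles(storage) {
  const raw = storage.getItem('chat-profiles');
  return raw ? JSON.parse(raw) : [];
}

// Minimal in-memory stand-in so the round trip also runs under Node.
const memoryStorage = {
  data: {},
  setItem(key, value) { this.data[key] = value; },
  getItem(key) { return this.data[key] ?? null; },
};

saveProfiles(memoryStorage, [{ name: 'Default', model: 'DeepSeek-R1' }]);
```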
## xsai Integration 🤖
This application now uses [xsai](https://github.com/moeru-ai/xsai) - an extra-small AI SDK for efficient LLM connections. The integration provides:
### Key Benefits
- **Lightweight**: Minimal dependencies and small bundle size
- **Runtime Agnostic**: Works in Node.js, Deno, Bun, and browsers
- **Streaming Support**: Built-in streaming capabilities for real-time responses
- **Reasoning Extraction**: Automatic extraction of thinking processes from model responses
### Technical Implementation
- **Chat Streaming**: Uses `@xsai/stream-text` for real-time message streaming
- **Summarization**: Uses `@xsai/generate-text` for conversation title generation
- **Reasoning Processing**: Uses `@xsai/utils-reasoning` to extract and display thinking processes
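The core idea behind reasoning extraction: models such as DeepSeek-R1 wrap their chain of thought in `<think>...</think>` tags, and the thinking block is split from the final answer before display. The function below only illustrates that idea; it is not the `@xsai/utils-reasoning` API:

```javascript
// Illustrative sketch (not the library's actual API): split a model
// response into its <think>...</think> reasoning and the final answer.
function splitReasoning(raw) {
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) {
    // No reasoning block: the whole response is the answer.
    return { reasoning: '', text: raw.trim() };
  }
  return {
    reasoning: match[1].trim(),
    text: raw.replace(match[0], '').trim(),
  };
}
```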
### Testing xsai Integration
To test the xsai integration independently:
1. Edit the `test-xsai.js` file with your API credentials
2. Run the test script:
```bash
node test-xsai.js
```
This will test both text generation and streaming with reasoning extraction.
### Migration from node-fetch
The application has been migrated from using `node-fetch` directly to using xsai's abstraction layer. This provides:
- Better error handling
- Consistent API across different model providers
- Built-in streaming utilities
- Simplified reasoning extraction