---
title: Thinking Model Client
emoji: πŸ€–
colorFrom: blue
colorTo: blue
sdk: docker
sdk_version: 1.0.0
app_file: Dockerfile
pinned: false
---

# Thinking Model Client πŸ§ πŸ€–


A modern React-based chat application that provides a unique interface for interacting with AI models. The application not only displays model responses but also visualizes the thinking process behind each response, giving users insight into how the AI arrives at its conclusions.

## Table of Contents

- Features
- Getting Started
- Configuration

## Features ✨

- 🧠 **Thinking Process Visualization**: See the step-by-step reasoning behind each AI response with interactive visualizations
- πŸ”— **Flexible API Integration**: Easily connect to different AI models through configurable API endpoints
- πŸ’Ύ **Conversation Persistence**: All chats are automatically saved in local storage for continuity
- 🐳 **Docker Deployment**: Ready for containerized deployment with the included Docker configuration
- βš™οΈ **Customizable Settings**: Adjust API parameters and model configurations through an intuitive settings panel
- πŸ’¬ **Real-time Chat**: Modern interface with smooth animations and multiple conversation tabs
- πŸ€– **Multiple Models**: Support for various AI model integrations through a unified interface
- πŸ› οΈ **Modern Stack**: Built with React and Vite for fast builds and a smooth development experience
- πŸ§ͺ **Quality Assured**: Comprehensive unit tests ensure reliable functionality
- πŸ”’ **Local Data Storage**: All data is stored locally for enhanced privacy and security
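The conversation-persistence feature can be sketched as a thin wrapper around `localStorage`. The storage key and conversation shape below are assumptions for illustration, not the app's actual schema; an in-memory fallback is included so the sketch also runs outside a browser.

```javascript
// Minimal sketch of local-storage conversation persistence.
// STORAGE_KEY and the conversation shape are hypothetical.
const memoryStore = new Map();
const store = (typeof localStorage !== 'undefined')
  ? localStorage
  : { // In-memory stand-in for environments without localStorage (e.g. Node)
      getItem: (k) => (memoryStore.has(k) ? memoryStore.get(k) : null),
      setItem: (k, v) => memoryStore.set(k, String(v)),
    };

const STORAGE_KEY = 'thinking-model-conversations'; // assumed key name

function saveConversations(conversations) {
  // Serialize the whole conversation list in one write.
  store.setItem(STORAGE_KEY, JSON.stringify(conversations));
}

function loadConversations() {
  // Return an empty list when nothing has been saved yet.
  const raw = store.getItem(STORAGE_KEY);
  return raw ? JSON.parse(raw) : [];
}
```

Because everything goes through `localStorage`, conversations survive page reloads without any server-side state, which is what makes the fully-local privacy model possible.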

## Getting Started

### Prerequisites

- Node.js (v14 or higher)
- npm or yarn

### Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/tao12345666333/thinking-model-client.git
   cd thinking-model-client
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Start the development server:

   ```bash
   npm start
   ```

   This will concurrently run both the frontend development server and the backend proxy server.

4. Open your browser and navigate to http://localhost:5173 to use the application.

## Configuration

The application can be configured through the settings panel, which supports multiple profiles:

### Chat Profiles

Each chat profile includes:

- **Profile Name**: A custom name for the profile
- **API Endpoint**: The endpoint for the AI model
  - Ends with `/` β†’ `/chat/completions` will be appended
  - Ends with `#` β†’ the `#` will be removed and the endpoint used as-is
  - Other cases β†’ `/v1/chat/completions` will be appended
- **API Key**: Your authentication key for the API
- **Model Name**: The model to use (e.g., `DeepSeek-R1`)
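The endpoint rules above can be sketched as a small helper. The function name is hypothetical, and the trailing-slash branch is written to avoid a doubled slash, which I assume is the intended behavior; the client's actual implementation may differ.

```javascript
// Sketch of the endpoint-normalization rules (hypothetical helper name).
function resolveEndpoint(endpoint) {
  if (endpoint.endsWith('/')) {
    // Trailing slash: the final URL ends in /chat/completions.
    return endpoint + 'chat/completions';
  }
  if (endpoint.endsWith('#')) {
    // Trailing hash: use the endpoint verbatim, minus the hash.
    return endpoint.slice(0, -1);
  }
  // Default: append the standard OpenAI-style path.
  return endpoint + '/v1/chat/completions';
}
```

The `#` escape hatch lets you point at an endpoint that already includes its full path, without the client appending anything.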

### Summarization Profile

A separate profile for conversation summarization:

- **API Endpoint**: Endpoint for the summarization service
- **API Key**: Authentication key for summarization
- **Model Name**: The model to use for summarization

All settings are stored locally for privacy and security. You can manage multiple chat profiles and switch between them as needed.
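Taken together, the locally stored settings might look roughly like the object below. All field names here are assumptions for illustration; the app's real schema may differ.

```javascript
// Hypothetical shape of the locally stored settings.
const settings = {
  activeProfileId: 'default',
  chatProfiles: [
    {
      id: 'default',
      name: 'My DeepSeek profile',        // Profile Name
      apiEndpoint: 'https://api.example.com/', // API Endpoint
      apiKey: 'example-key',              // API Key (placeholder value)
      model: 'DeepSeek-R1',               // Model Name
    },
  ],
  summarization: {
    apiEndpoint: 'https://api.example.com/',
    apiKey: 'example-key',
    model: 'DeepSeek-R1',
  },
};

// Resolve whichever chat profile is currently selected in the settings panel.
function activeProfile(s) {
  return s.chatProfiles.find((p) => p.id === s.activeProfileId);
}
```

Switching profiles then amounts to updating `activeProfileId` and re-saving the settings object to local storage.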