Spaces:
Runtime error
Runtime error
feat: integrate xsai SDK for LLM provider connections
Browse filesBREAKING CHANGE: Migrate from node-fetch to xsai SDK
- Replace node-fetch with
@xsai
/stream-text,
@xsai
/generate-text,
@xsai
/utils-reasoning
- Enhanced chat endpoint with automatic reasoning extraction
- Improved summarization endpoint with direct text generation
- Added comprehensive test script (test-xsai.js)
- Updated documentation and changelog
- Bump version to 0.2.0
Benefits:
- Lightweight and runtime-agnostic AI SDK
- Automatic extraction of thinking processes
- Better streaming reliability and error handling
- Consistent API across different model providers
- Enhanced reasoning visualization capabilities
- CHANGELOG.md +24 -0
- README.md +38 -0
- XSAI_INTEGRATION_SUMMARY.md +120 -0
- package-lock.json +43 -85
- package.json +4 -2
- server/index.js +94 -66
- test-xsai.js +94 -0
CHANGELOG.md
CHANGED
|
@@ -4,6 +4,30 @@ All notable changes to this project will be documented in this file.
|
|
| 4 |
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
|
| 5 |
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
|
| 6 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 7 |
## [0.1.1] - 2025-03-23
|
| 8 |
|
| 9 |
### Added
|
|
|
|
| 4 |
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
|
| 5 |
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
|
| 6 |
|
| 7 |
+
## [0.2.0] - 2025-07-24
|
| 8 |
+
|
| 9 |
+
### Added
|
| 10 |
+
- xsai integration for LLM provider connections
|
| 11 |
+
- Automatic reasoning extraction using `@xsai/utils-reasoning`
|
| 12 |
+
- Test script for xsai integration (`test-xsai.js`)
|
| 13 |
+
- Enhanced documentation for xsai usage
|
| 14 |
+
|
| 15 |
+
### Changed
|
| 16 |
+
- **BREAKING**: Migrated from `node-fetch` to xsai SDK for all AI model connections
|
| 17 |
+
- Chat endpoint now uses `@xsai/stream-text` for streaming responses
|
| 18 |
+
- Summarization endpoint now uses `@xsai/generate-text` for text generation
|
| 19 |
+
- Improved error handling and streaming reliability
|
| 20 |
+
- Enhanced reasoning process extraction and display
|
| 21 |
+
|
| 22 |
+
### Removed
|
| 23 |
+
- `node-fetch` dependency (replaced by xsai packages)
|
| 24 |
+
|
| 25 |
+
### Technical Details
|
| 26 |
+
- Server now uses xsai's `streamText` and `generateText` functions
|
| 27 |
+
- Automatic extraction of thinking processes from model responses
|
| 28 |
+
- Better streaming performance and error handling
|
| 29 |
+
- Runtime-agnostic AI SDK support
|
| 30 |
+
|
| 31 |
## [0.1.1] - 2025-03-23
|
| 32 |
|
| 33 |
### Added
|
README.md
CHANGED
|
@@ -37,6 +37,8 @@ A modern React-based chat application that provides a unique interface for inter
|
|
| 37 |
- π οΈ **Modern Stack**: Built with React and Vite for optimal performance and development experience
|
| 38 |
- π§ͺ **Quality Assured**: Comprehensive unit tests ensure reliable functionality
|
| 39 |
- π **Local Data Storage**: All data is stored locally for enhanced privacy and security
|
|
|
|
|
|
|
| 40 |
|
| 41 |
## Getting Started
|
| 42 |
|
|
@@ -93,3 +95,39 @@ A separate profile for conversation summarization:
|
|
| 93 |
- **Model Name**: The model to use for summarization
|
| 94 |
|
| 95 |
All settings are stored locally for privacy and security. You can manage multiple chat profiles and switch between them as needed.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 37 |
- π οΈ **Modern Stack**: Built with React and Vite for optimal performance and development experience
|
| 38 |
- π§ͺ **Quality Assured**: Comprehensive unit tests ensure reliable functionality
|
| 39 |
- π **Local Data Storage**: All data is stored locally for enhanced privacy and security
|
| 40 |
+
- β‘ **xsai Integration**: Powered by xsai (extra-small AI SDK) for efficient and lightweight AI model connections
|
| 41 |
+
- π§© **Reasoning Extraction**: Automatic extraction and visualization of AI reasoning processes using xsai utilities
|
| 42 |
|
| 43 |
## Getting Started
|
| 44 |
|
|
|
|
| 95 |
- **Model Name**: The model to use for summarization
|
| 96 |
|
| 97 |
All settings are stored locally for privacy and security. You can manage multiple chat profiles and switch between them as needed.
|
| 98 |
+
|
| 99 |
+
## xsai Integration π€
|
| 100 |
+
|
| 101 |
+
This application now uses [xsai](https://github.com/moeru-ai/xsai) - an extra-small AI SDK for efficient LLM connections. The integration provides:
|
| 102 |
+
|
| 103 |
+
### Key Benefits
|
| 104 |
+
- **Lightweight**: Minimal dependencies and small bundle size
|
| 105 |
+
- **Runtime Agnostic**: Works in Node.js, Deno, Bun, and browsers
|
| 106 |
+
- **Streaming Support**: Built-in streaming capabilities for real-time responses
|
| 107 |
+
- **Reasoning Extraction**: Automatic extraction of thinking processes from model responses
|
| 108 |
+
|
| 109 |
+
### Technical Implementation
|
| 110 |
+
- **Chat Streaming**: Uses `@xsai/stream-text` for real-time message streaming
|
| 111 |
+
- **Summarization**: Uses `@xsai/generate-text` for conversation title generation
|
| 112 |
+
- **Reasoning Processing**: Uses `@xsai/utils-reasoning` to extract and display thinking processes
|
| 113 |
+
|
| 114 |
+
### Testing xsai Integration
|
| 115 |
+
|
| 116 |
+
To test the xsai integration independently:
|
| 117 |
+
|
| 118 |
+
1. Edit the `test-xsai.js` file with your API credentials
|
| 119 |
+
2. Run the test script:
|
| 120 |
+
|
| 121 |
+
```bash
|
| 122 |
+
node test-xsai.js
|
| 123 |
+
```
|
| 124 |
+
|
| 125 |
+
This will test both text generation and streaming with reasoning extraction.
|
| 126 |
+
|
| 127 |
+
### Migration from node-fetch
|
| 128 |
+
|
| 129 |
+
The application has been migrated from using `node-fetch` directly to using xsai's abstraction layer. This provides:
|
| 130 |
+
- Better error handling
|
| 131 |
+
- Consistent API across different model providers
|
| 132 |
+
- Built-in streaming utilities
|
| 133 |
+
- Simplified reasoning extraction
|
XSAI_INTEGRATION_SUMMARY.md
ADDED
|
@@ -0,0 +1,120 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# xsai Integration Summary
|
| 2 |
+
|
| 3 |
+
## π― Objective
|
| 4 |
+
Migrated the thinking-model-client from using `node-fetch` directly to using the [xsai](https://github.com/moeru-ai/xsai) library for LLM provider connections.
|
| 5 |
+
|
| 6 |
+
## π Changes Made
|
| 7 |
+
|
| 8 |
+
### 1. Dependencies Updated
|
| 9 |
+
- **Added**:
|
| 10 |
+
- `@xsai/stream-text@^0.3.3` - For streaming text responses
|
| 11 |
+
- `@xsai/generate-text@^0.3.3` - For text generation (summarization)
|
| 12 |
+
- `@xsai/utils-reasoning@^0.3.3` - For extracting reasoning from responses
|
| 13 |
+
- **Removed**:
|
| 14 |
+
- `node-fetch` - No longer needed
|
| 15 |
+
|
| 16 |
+
### 2. Server Implementation (`server/index.js`)
|
| 17 |
+
|
| 18 |
+
#### Chat Endpoint (`/api/chat`)
|
| 19 |
+
- **Before**: Used `node-fetch` with manual streaming setup
|
| 20 |
+
- **After**: Uses `xsai.streamText()` with automatic reasoning extraction
|
| 21 |
+
- **Benefits**:
|
| 22 |
+
- Cleaner error handling
|
| 23 |
+
- Automatic reasoning/content separation
|
| 24 |
+
- Better streaming reliability
|
| 25 |
+
- Built-in support for different model formats
|
| 26 |
+
|
| 27 |
+
#### Summarization Endpoint (`/api/summarize`)
|
| 28 |
+
- **Before**: Used `node-fetch` with manual JSON parsing
|
| 29 |
+
- **After**: Uses `xsai.generateText()` for direct text generation
|
| 30 |
+
- **Benefits**:
|
| 31 |
+
- Simplified API calls
|
| 32 |
+
- Better error handling
|
| 33 |
+
- Consistent interface across providers
|
| 34 |
+
|
| 35 |
+
### 3. Key Features Enhanced
|
| 36 |
+
|
| 37 |
+
#### Reasoning Extraction
|
| 38 |
+
- Automatically extracts `<think>...</think>` tags from AI responses
|
| 39 |
+
- Separates reasoning process from final answer
|
| 40 |
+
- Streams reasoning first, then content for better UX
|
| 41 |
+
|
| 42 |
+
#### Streaming Improvements
|
| 43 |
+
- More reliable streaming with better error handling
|
| 44 |
+
- Consistent format across different model providers
|
| 45 |
+
- Automatic handling of different response formats
|
| 46 |
+
|
| 47 |
+
### 4. Testing & Documentation
|
| 48 |
+
|
| 49 |
+
#### Test Script (`test-xsai.js`)
|
| 50 |
+
- Created comprehensive test script to verify xsai integration
|
| 51 |
+
- Tests both `generateText` and `streamText` with reasoning extraction
|
| 52 |
+
- Provides easy way to validate setup with different API providers
|
| 53 |
+
|
| 54 |
+
#### Documentation Updates
|
| 55 |
+
- Updated README.md with xsai integration details
|
| 56 |
+
- Added technical implementation details
|
| 57 |
+
- Enhanced feature descriptions
|
| 58 |
+
- Updated CHANGELOG.md with breaking changes
|
| 59 |
+
- Version bumped to 0.2.0 (breaking change)
|
| 60 |
+
|
| 61 |
+
## π Benefits of xsai Integration
|
| 62 |
+
|
| 63 |
+
### Performance
|
| 64 |
+
- **Lightweight**: Smaller bundle size compared to multiple HTTP client dependencies
|
| 65 |
+
- **Efficient**: Optimized for AI model interactions
|
| 66 |
+
- **Runtime Agnostic**: Works in Node.js, Deno, Bun, and browsers
|
| 67 |
+
|
| 68 |
+
### Developer Experience
|
| 69 |
+
- **Simplified API**: Consistent interface across different model providers
|
| 70 |
+
- **Better Error Handling**: Built-in error handling and retry logic
|
| 71 |
+
- **Type Safety**: Better TypeScript support for AI interactions
|
| 72 |
+
|
| 73 |
+
### Features
|
| 74 |
+
- **Automatic Reasoning Extraction**: Built-in support for thinking processes
|
| 75 |
+
- **Streaming Utilities**: Advanced streaming capabilities
|
| 76 |
+
- **Multiple Providers**: Easily switch between different AI providers
|
| 77 |
+
|
| 78 |
+
## π§ͺ Testing the Integration
|
| 79 |
+
|
| 80 |
+
1. **Start the server**:
|
| 81 |
+
```bash
|
| 82 |
+
npm run server
|
| 83 |
+
```
|
| 84 |
+
|
| 85 |
+
2. **Test with the frontend**:
|
| 86 |
+
```bash
|
| 87 |
+
npm start
|
| 88 |
+
```
|
| 89 |
+
|
| 90 |
+
3. **Run standalone tests**:
|
| 91 |
+
```bash
|
| 92 |
+
node test-xsai.js
|
| 93 |
+
```
|
| 94 |
+
(After updating API credentials in the test file)
|
| 95 |
+
|
| 96 |
+
## π§ Configuration
|
| 97 |
+
|
| 98 |
+
The application maintains the same configuration interface:
|
| 99 |
+
- API endpoints are automatically converted to xsai's baseURL format
|
| 100 |
+
- All existing profiles and settings continue to work
|
| 101 |
+
- No changes required to existing user configurations
|
| 102 |
+
|
| 103 |
+
## π Migration Notes
|
| 104 |
+
|
| 105 |
+
This is a **breaking change** internally but maintains API compatibility:
|
| 106 |
+
- Server endpoints (`/api/chat`, `/api/summarize`) maintain same interface
|
| 107 |
+
- Frontend code requires no changes
|
| 108 |
+
- User configurations remain compatible
|
| 109 |
+
- Docker deployments work without changes
|
| 110 |
+
|
| 111 |
+
## π Result
|
| 112 |
+
|
| 113 |
+
The thinking-model-client now uses xsai for all LLM interactions, providing:
|
| 114 |
+
- More reliable streaming
|
| 115 |
+
- Better reasoning extraction
|
| 116 |
+
- Cleaner codebase
|
| 117 |
+
- Enhanced error handling
|
| 118 |
+
- Future-proof architecture for AI model connections
|
| 119 |
+
|
| 120 |
+
The migration is complete and fully functional!
|
package-lock.json
CHANGED
|
@@ -8,9 +8,11 @@
|
|
| 8 |
"name": "thinking-model-client",
|
| 9 |
"version": "0.1.1",
|
| 10 |
"dependencies": {
|
|
|
|
|
|
|
|
|
|
| 11 |
"cors": "^2.8.5",
|
| 12 |
"express": "^4.18.2",
|
| 13 |
-
"node-fetch": "^3.3.2",
|
| 14 |
"react": "^18.2.0",
|
| 15 |
"react-dom": "^18.2.0",
|
| 16 |
"react-markdown": "^10.0.0"
|
|
@@ -1180,6 +1182,46 @@
|
|
| 1180 |
"vite": "^4.2.0 || ^5.0.0 || ^6.0.0"
|
| 1181 |
}
|
| 1182 |
},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1183 |
"node_modules/accepts": {
|
| 1184 |
"version": "1.3.8",
|
| 1185 |
"resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.8.tgz",
|
|
@@ -2192,14 +2234,6 @@
|
|
| 2192 |
"node": ">=0.10"
|
| 2193 |
}
|
| 2194 |
},
|
| 2195 |
-
"node_modules/data-uri-to-buffer": {
|
| 2196 |
-
"version": "4.0.1",
|
| 2197 |
-
"resolved": "https://registry.npmjs.org/data-uri-to-buffer/-/data-uri-to-buffer-4.0.1.tgz",
|
| 2198 |
-
"integrity": "sha512-0R9ikRb668HB7QDxT1vkpuUBtqc53YyAwMwGeUFKRojY/NWKvdZ+9UYtRfGmhqNbRkTSVpMbmyhXipFFv2cb/A==",
|
| 2199 |
-
"engines": {
|
| 2200 |
-
"node": ">= 12"
|
| 2201 |
-
}
|
| 2202 |
-
},
|
| 2203 |
"node_modules/date-fns": {
|
| 2204 |
"version": "2.30.0",
|
| 2205 |
"resolved": "https://registry.npmjs.org/date-fns/-/date-fns-2.30.0.tgz",
|
|
@@ -2732,28 +2766,6 @@
|
|
| 2732 |
"pend": "~1.2.0"
|
| 2733 |
}
|
| 2734 |
},
|
| 2735 |
-
"node_modules/fetch-blob": {
|
| 2736 |
-
"version": "3.2.0",
|
| 2737 |
-
"resolved": "https://registry.npmjs.org/fetch-blob/-/fetch-blob-3.2.0.tgz",
|
| 2738 |
-
"integrity": "sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ==",
|
| 2739 |
-
"funding": [
|
| 2740 |
-
{
|
| 2741 |
-
"type": "github",
|
| 2742 |
-
"url": "https://github.com/sponsors/jimmywarting"
|
| 2743 |
-
},
|
| 2744 |
-
{
|
| 2745 |
-
"type": "paypal",
|
| 2746 |
-
"url": "https://paypal.me/jimmywarting"
|
| 2747 |
-
}
|
| 2748 |
-
],
|
| 2749 |
-
"dependencies": {
|
| 2750 |
-
"node-domexception": "^1.0.0",
|
| 2751 |
-
"web-streams-polyfill": "^3.0.3"
|
| 2752 |
-
},
|
| 2753 |
-
"engines": {
|
| 2754 |
-
"node": "^12.20 || >= 14.13"
|
| 2755 |
-
}
|
| 2756 |
-
},
|
| 2757 |
"node_modules/figures": {
|
| 2758 |
"version": "3.2.0",
|
| 2759 |
"resolved": "https://registry.npmjs.org/figures/-/figures-3.2.0.tgz",
|
|
@@ -2871,17 +2883,6 @@
|
|
| 2871 |
"node": ">= 6"
|
| 2872 |
}
|
| 2873 |
},
|
| 2874 |
-
"node_modules/formdata-polyfill": {
|
| 2875 |
-
"version": "4.0.10",
|
| 2876 |
-
"resolved": "https://registry.npmjs.org/formdata-polyfill/-/formdata-polyfill-4.0.10.tgz",
|
| 2877 |
-
"integrity": "sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g==",
|
| 2878 |
-
"dependencies": {
|
| 2879 |
-
"fetch-blob": "^3.1.2"
|
| 2880 |
-
},
|
| 2881 |
-
"engines": {
|
| 2882 |
-
"node": ">=12.20.0"
|
| 2883 |
-
}
|
| 2884 |
-
},
|
| 2885 |
"node_modules/forwarded": {
|
| 2886 |
"version": "0.2.0",
|
| 2887 |
"resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz",
|
|
@@ -4820,41 +4821,6 @@
|
|
| 4820 |
"node": ">= 0.6"
|
| 4821 |
}
|
| 4822 |
},
|
| 4823 |
-
"node_modules/node-domexception": {
|
| 4824 |
-
"version": "1.0.0",
|
| 4825 |
-
"resolved": "https://registry.npmjs.org/node-domexception/-/node-domexception-1.0.0.tgz",
|
| 4826 |
-
"integrity": "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ==",
|
| 4827 |
-
"funding": [
|
| 4828 |
-
{
|
| 4829 |
-
"type": "github",
|
| 4830 |
-
"url": "https://github.com/sponsors/jimmywarting"
|
| 4831 |
-
},
|
| 4832 |
-
{
|
| 4833 |
-
"type": "github",
|
| 4834 |
-
"url": "https://paypal.me/jimmywarting"
|
| 4835 |
-
}
|
| 4836 |
-
],
|
| 4837 |
-
"engines": {
|
| 4838 |
-
"node": ">=10.5.0"
|
| 4839 |
-
}
|
| 4840 |
-
},
|
| 4841 |
-
"node_modules/node-fetch": {
|
| 4842 |
-
"version": "3.3.2",
|
| 4843 |
-
"resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-3.3.2.tgz",
|
| 4844 |
-
"integrity": "sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA==",
|
| 4845 |
-
"dependencies": {
|
| 4846 |
-
"data-uri-to-buffer": "^4.0.0",
|
| 4847 |
-
"fetch-blob": "^3.1.4",
|
| 4848 |
-
"formdata-polyfill": "^4.0.10"
|
| 4849 |
-
},
|
| 4850 |
-
"engines": {
|
| 4851 |
-
"node": "^12.20.0 || ^14.13.1 || >=16.0.0"
|
| 4852 |
-
},
|
| 4853 |
-
"funding": {
|
| 4854 |
-
"type": "opencollective",
|
| 4855 |
-
"url": "https://opencollective.com/node-fetch"
|
| 4856 |
-
}
|
| 4857 |
-
},
|
| 4858 |
"node_modules/node-releases": {
|
| 4859 |
"version": "2.0.19",
|
| 4860 |
"resolved": "https://registry.npmjs.org/node-releases/-/node-releases-2.0.19.tgz",
|
|
@@ -6734,14 +6700,6 @@
|
|
| 6734 |
"node": ">=12.0.0"
|
| 6735 |
}
|
| 6736 |
},
|
| 6737 |
-
"node_modules/web-streams-polyfill": {
|
| 6738 |
-
"version": "3.3.3",
|
| 6739 |
-
"resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-3.3.3.tgz",
|
| 6740 |
-
"integrity": "sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw==",
|
| 6741 |
-
"engines": {
|
| 6742 |
-
"node": ">= 8"
|
| 6743 |
-
}
|
| 6744 |
-
},
|
| 6745 |
"node_modules/whatwg-encoding": {
|
| 6746 |
"version": "2.0.0",
|
| 6747 |
"resolved": "https://registry.npmjs.org/whatwg-encoding/-/whatwg-encoding-2.0.0.tgz",
|
|
|
|
| 8 |
"name": "thinking-model-client",
|
| 9 |
"version": "0.1.1",
|
| 10 |
"dependencies": {
|
| 11 |
+
"@xsai/generate-text": "^0.3.3",
|
| 12 |
+
"@xsai/stream-text": "^0.3.3",
|
| 13 |
+
"@xsai/utils-reasoning": "^0.3.3",
|
| 14 |
"cors": "^2.8.5",
|
| 15 |
"express": "^4.18.2",
|
|
|
|
| 16 |
"react": "^18.2.0",
|
| 17 |
"react-dom": "^18.2.0",
|
| 18 |
"react-markdown": "^10.0.0"
|
|
|
|
| 1182 |
"vite": "^4.2.0 || ^5.0.0 || ^6.0.0"
|
| 1183 |
}
|
| 1184 |
},
|
| 1185 |
+
"node_modules/@xsai/generate-text": {
|
| 1186 |
+
"version": "0.3.3",
|
| 1187 |
+
"resolved": "https://registry.npmjs.org/@xsai/generate-text/-/generate-text-0.3.3.tgz",
|
| 1188 |
+
"integrity": "sha512-lVaUzbIgGOdsbKUm5p1ftSYteq3tMCismodY7tK+MEOlaEErmSeS+SRLZmz6X2Jn23ZuYsrj1gbGqdTciBjgHg==",
|
| 1189 |
+
"license": "MIT",
|
| 1190 |
+
"dependencies": {
|
| 1191 |
+
"@xsai/shared": "~0.3.3",
|
| 1192 |
+
"@xsai/shared-chat": "~0.3.3"
|
| 1193 |
+
}
|
| 1194 |
+
},
|
| 1195 |
+
"node_modules/@xsai/shared": {
|
| 1196 |
+
"version": "0.3.3",
|
| 1197 |
+
"resolved": "https://registry.npmjs.org/@xsai/shared/-/shared-0.3.3.tgz",
|
| 1198 |
+
"integrity": "sha512-1xul8h7He5cM+H/gGx8pdE/LLqujO3xSHM2+V4XVEjI05VxQq+s5lk42/7NUUbjcvcFxi5Ow9IL4pezHgxq9aQ==",
|
| 1199 |
+
"license": "MIT"
|
| 1200 |
+
},
|
| 1201 |
+
"node_modules/@xsai/shared-chat": {
|
| 1202 |
+
"version": "0.3.3",
|
| 1203 |
+
"resolved": "https://registry.npmjs.org/@xsai/shared-chat/-/shared-chat-0.3.3.tgz",
|
| 1204 |
+
"integrity": "sha512-zCfAlXhNfQ3+ErhP8BnSIdsELAENHubfscLdImls+9zNHfY1AzIaBuRIf5dWJYmC4/NsOop1QHYjfM+7Wg4gjw==",
|
| 1205 |
+
"license": "MIT",
|
| 1206 |
+
"dependencies": {
|
| 1207 |
+
"@xsai/shared": "~0.3.3"
|
| 1208 |
+
}
|
| 1209 |
+
},
|
| 1210 |
+
"node_modules/@xsai/stream-text": {
|
| 1211 |
+
"version": "0.3.3",
|
| 1212 |
+
"resolved": "https://registry.npmjs.org/@xsai/stream-text/-/stream-text-0.3.3.tgz",
|
| 1213 |
+
"integrity": "sha512-Y/f4EkvWIF6nJ/07RJpneAugMV6pHiDi1c0X2pn6m6NhGodpj1HZKzvYTFryW/ZYc5NCG+B7AXUQyCy8PUSOeg==",
|
| 1214 |
+
"license": "MIT",
|
| 1215 |
+
"dependencies": {
|
| 1216 |
+
"@xsai/shared-chat": "~0.3.3"
|
| 1217 |
+
}
|
| 1218 |
+
},
|
| 1219 |
+
"node_modules/@xsai/utils-reasoning": {
|
| 1220 |
+
"version": "0.3.3",
|
| 1221 |
+
"resolved": "https://registry.npmjs.org/@xsai/utils-reasoning/-/utils-reasoning-0.3.3.tgz",
|
| 1222 |
+
"integrity": "sha512-4YiikRMij9ns9I2qEZeyHrG3SSfghfxMJU4DpvNdmZznDT3OujUMHrgD3BuvWXTyuGuyAyV8ZiZ0TR+DXl/Rkg==",
|
| 1223 |
+
"license": "MIT"
|
| 1224 |
+
},
|
| 1225 |
"node_modules/accepts": {
|
| 1226 |
"version": "1.3.8",
|
| 1227 |
"resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.8.tgz",
|
|
|
|
| 2234 |
"node": ">=0.10"
|
| 2235 |
}
|
| 2236 |
},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 2237 |
"node_modules/date-fns": {
|
| 2238 |
"version": "2.30.0",
|
| 2239 |
"resolved": "https://registry.npmjs.org/date-fns/-/date-fns-2.30.0.tgz",
|
|
|
|
| 2766 |
"pend": "~1.2.0"
|
| 2767 |
}
|
| 2768 |
},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 2769 |
"node_modules/figures": {
|
| 2770 |
"version": "3.2.0",
|
| 2771 |
"resolved": "https://registry.npmjs.org/figures/-/figures-3.2.0.tgz",
|
|
|
|
| 2883 |
"node": ">= 6"
|
| 2884 |
}
|
| 2885 |
},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 2886 |
"node_modules/forwarded": {
|
| 2887 |
"version": "0.2.0",
|
| 2888 |
"resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz",
|
|
|
|
| 4821 |
"node": ">= 0.6"
|
| 4822 |
}
|
| 4823 |
},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 4824 |
"node_modules/node-releases": {
|
| 4825 |
"version": "2.0.19",
|
| 4826 |
"resolved": "https://registry.npmjs.org/node-releases/-/node-releases-2.0.19.tgz",
|
|
|
|
| 6700 |
"node": ">=12.0.0"
|
| 6701 |
}
|
| 6702 |
},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 6703 |
"node_modules/whatwg-encoding": {
|
| 6704 |
"version": "2.0.0",
|
| 6705 |
"resolved": "https://registry.npmjs.org/whatwg-encoding/-/whatwg-encoding-2.0.0.tgz",
|
package.json
CHANGED
|
@@ -1,7 +1,7 @@
|
|
| 1 |
{
|
| 2 |
"name": "thinking-model-client",
|
| 3 |
"private": true,
|
| 4 |
-
"version": "0.
|
| 5 |
"type": "module",
|
| 6 |
"scripts": {
|
| 7 |
"dev": "vite",
|
|
@@ -18,9 +18,11 @@
|
|
| 18 |
"test:build": "start-server-and-test serve:build http://localhost:5173 'cypress run --spec cypress/e2e/message-routing-simple.cy.js'"
|
| 19 |
},
|
| 20 |
"dependencies": {
|
|
|
|
|
|
|
|
|
|
| 21 |
"cors": "^2.8.5",
|
| 22 |
"express": "^4.18.2",
|
| 23 |
-
"node-fetch": "^3.3.2",
|
| 24 |
"react": "^18.2.0",
|
| 25 |
"react-dom": "^18.2.0",
|
| 26 |
"react-markdown": "^10.0.0"
|
|
|
|
| 1 |
{
|
| 2 |
"name": "thinking-model-client",
|
| 3 |
"private": true,
|
| 4 |
+
"version": "0.2.0",
|
| 5 |
"type": "module",
|
| 6 |
"scripts": {
|
| 7 |
"dev": "vite",
|
|
|
|
| 18 |
"test:build": "start-server-and-test serve:build http://localhost:5173 'cypress run --spec cypress/e2e/message-routing-simple.cy.js'"
|
| 19 |
},
|
| 20 |
"dependencies": {
|
| 21 |
+
"@xsai/generate-text": "^0.3.3",
|
| 22 |
+
"@xsai/stream-text": "^0.3.3",
|
| 23 |
+
"@xsai/utils-reasoning": "^0.3.3",
|
| 24 |
"cors": "^2.8.5",
|
| 25 |
"express": "^4.18.2",
|
|
|
|
| 26 |
"react": "^18.2.0",
|
| 27 |
"react-dom": "^18.2.0",
|
| 28 |
"react-markdown": "^10.0.0"
|
server/index.js
CHANGED
|
@@ -1,8 +1,10 @@
|
|
| 1 |
import express from 'express';
|
| 2 |
import cors from 'cors';
|
| 3 |
-
import fetch from 'node-fetch';
|
| 4 |
import path from 'path'
|
| 5 |
import { fileURLToPath } from 'url'
|
|
|
|
|
|
|
|
|
|
| 6 |
|
| 7 |
const app = express();
|
| 8 |
const port = 7860;
|
|
@@ -23,44 +25,29 @@ app.post('/api/summarize', async (req, res) => {
|
|
| 23 |
});
|
| 24 |
|
| 25 |
try {
|
| 26 |
-
let
|
| 27 |
if (apiEndpoint.endsWith('#')) {
|
| 28 |
-
|
| 29 |
} else if (apiEndpoint.endsWith('/')) {
|
| 30 |
-
|
| 31 |
} else {
|
| 32 |
-
|
| 33 |
}
|
| 34 |
-
console.log('Calling API
|
| 35 |
|
| 36 |
-
const
|
| 37 |
-
|
| 38 |
-
|
| 39 |
-
|
| 40 |
-
|
| 41 |
-
|
| 42 |
-
|
| 43 |
-
|
| 44 |
-
|
| 45 |
-
|
| 46 |
-
content: `Summarize this conversation in 3-5 words: ${content}`
|
| 47 |
-
}],
|
| 48 |
-
temperature: 0.2,
|
| 49 |
-
max_tokens: 20
|
| 50 |
-
})
|
| 51 |
});
|
| 52 |
|
| 53 |
-
|
| 54 |
-
const errorData = await response.text();
|
| 55 |
-
console.error('API error:', {
|
| 56 |
-
status: response.status,
|
| 57 |
-
error: errorData
|
| 58 |
-
});
|
| 59 |
-
throw new Error(`API error: ${response.status} - ${errorData}`);
|
| 60 |
-
}
|
| 61 |
-
|
| 62 |
-
const data = await response.json();
|
| 63 |
-
const summary = data.choices[0].message.content.trim();
|
| 64 |
res.json({ summary });
|
| 65 |
} catch (error) {
|
| 66 |
console.error('Error:', error);
|
|
@@ -78,40 +65,26 @@ app.post('/api/chat', async (req, res) => {
|
|
| 78 |
});
|
| 79 |
|
| 80 |
try {
|
| 81 |
-
let
|
| 82 |
if (apiEndpoint.endsWith('#')) {
|
| 83 |
-
|
| 84 |
} else if (apiEndpoint.endsWith('/')) {
|
| 85 |
-
|
| 86 |
} else {
|
| 87 |
-
|
| 88 |
}
|
| 89 |
-
console.log('Calling API
|
| 90 |
|
| 91 |
-
|
| 92 |
-
|
| 93 |
-
|
| 94 |
-
|
| 95 |
-
|
| 96 |
-
|
| 97 |
-
body: JSON.stringify({
|
| 98 |
-
model: model,
|
| 99 |
-
messages: messages,
|
| 100 |
-
stream: true
|
| 101 |
-
})
|
| 102 |
});
|
| 103 |
|
| 104 |
-
|
| 105 |
-
|
| 106 |
-
|
| 107 |
-
if (!response.ok) {
|
| 108 |
-
const errorData = await response.text();
|
| 109 |
-
console.error('API error:', {
|
| 110 |
-
status: response.status,
|
| 111 |
-
error: errorData
|
| 112 |
-
});
|
| 113 |
-
throw new Error(`API error: ${response.status} - ${errorData}`);
|
| 114 |
-
}
|
| 115 |
|
| 116 |
// Set headers for streaming
|
| 117 |
res.setHeader('Content-Type', 'text/event-stream');
|
|
@@ -125,16 +98,71 @@ app.post('/api/chat', async (req, res) => {
|
|
| 125 |
'Connection': 'keep-alive'
|
| 126 |
});
|
| 127 |
|
| 128 |
-
|
| 129 |
-
|
| 130 |
-
|
| 131 |
-
|
| 132 |
-
|
| 133 |
-
|
|
|
|
| 134 |
});
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 135 |
} catch (error) {
|
| 136 |
console.error('Error:', error);
|
| 137 |
-
res.
|
|
|
|
|
|
|
| 138 |
}
|
| 139 |
});
|
| 140 |
|
|
|
|
| 1 |
import express from 'express';
|
| 2 |
import cors from 'cors';
|
|
|
|
| 3 |
import path from 'path'
|
| 4 |
import { fileURLToPath } from 'url'
|
| 5 |
+
import { streamText } from '@xsai/stream-text';
|
| 6 |
+
import { generateText } from '@xsai/generate-text';
|
| 7 |
+
import { extractReasoningStream } from '@xsai/utils-reasoning';
|
| 8 |
|
| 9 |
const app = express();
|
| 10 |
const port = 7860;
|
|
|
|
| 25 |
});
|
| 26 |
|
| 27 |
try {
|
| 28 |
+
let baseURL;
|
| 29 |
if (apiEndpoint.endsWith('#')) {
|
| 30 |
+
baseURL = apiEndpoint.slice(0, -1);
|
| 31 |
} else if (apiEndpoint.endsWith('/')) {
|
| 32 |
+
baseURL = `${apiEndpoint}v1`;
|
| 33 |
} else {
|
| 34 |
+
baseURL = `${apiEndpoint}/v1`;
|
| 35 |
}
|
| 36 |
+
console.log('Calling API baseURL:', baseURL);
|
| 37 |
|
| 38 |
+
const { text } = await generateText({
|
| 39 |
+
apiKey: apiKey,
|
| 40 |
+
baseURL: baseURL,
|
| 41 |
+
model: model,
|
| 42 |
+
messages: [{
|
| 43 |
+
role: 'user',
|
| 44 |
+
content: `Summarize this conversation in 3-5 words: ${content}`
|
| 45 |
+
}],
|
| 46 |
+
temperature: 0.2,
|
| 47 |
+
max_tokens: 20
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 48 |
});
|
| 49 |
|
| 50 |
+
const summary = text.trim();
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 51 |
res.json({ summary });
|
| 52 |
} catch (error) {
|
| 53 |
console.error('Error:', error);
|
|
|
|
| 65 |
});
|
| 66 |
|
| 67 |
try {
|
| 68 |
+
let baseURL;
|
| 69 |
if (apiEndpoint.endsWith('#')) {
|
| 70 |
+
baseURL = apiEndpoint.slice(0, -1);
|
| 71 |
} else if (apiEndpoint.endsWith('/')) {
|
| 72 |
+
baseURL = `${apiEndpoint}v1`;
|
| 73 |
} else {
|
| 74 |
+
baseURL = `${apiEndpoint}/v1`;
|
| 75 |
}
|
| 76 |
+
console.log('Calling API baseURL:', baseURL);
|
| 77 |
|
| 78 |
+
// Use xsai to stream text from the AI model
|
| 79 |
+
const { textStream } = await streamText({
|
| 80 |
+
apiKey: apiKey,
|
| 81 |
+
baseURL: baseURL,
|
| 82 |
+
model: model,
|
| 83 |
+
messages: messages
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 84 |
});
|
| 85 |
|
| 86 |
+
// Extract reasoning and content streams
|
| 87 |
+
const { reasoningStream, textStream: contentStream } = extractReasoningStream(textStream);
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 88 |
|
| 89 |
// Set headers for streaming
|
| 90 |
res.setHeader('Content-Type', 'text/event-stream');
|
|
|
|
| 98 |
'Connection': 'keep-alive'
|
| 99 |
});
|
| 100 |
|
| 101 |
+
let reasoningText = '';
|
| 102 |
+
let contentText = '';
|
| 103 |
+
let isReasoningComplete = false;
|
| 104 |
+
|
| 105 |
+
// Handle client disconnect
|
| 106 |
+
req.on('close', () => {
|
| 107 |
+
console.log('Client disconnected');
|
| 108 |
});
|
| 109 |
+
|
| 110 |
+
try {
|
| 111 |
+
// First, collect all reasoning
|
| 112 |
+
console.log('Collecting reasoning...');
|
| 113 |
+
for await (const chunk of reasoningStream) {
|
| 114 |
+
reasoningText += chunk;
|
| 115 |
+
}
|
| 116 |
+
isReasoningComplete = true;
|
| 117 |
+
console.log('Reasoning collection complete');
|
| 118 |
+
|
| 119 |
+
// If we have reasoning, send it first wrapped in think tags
|
| 120 |
+
if (reasoningText.trim()) {
|
| 121 |
+
const thinkingChunk = {
|
| 122 |
+
choices: [{
|
| 123 |
+
delta: {
|
| 124 |
+
content: `<think>${reasoningText}</think>`
|
| 125 |
+
}
|
| 126 |
+
}]
|
| 127 |
+
};
|
| 128 |
+
res.write(`data: ${JSON.stringify(thinkingChunk)}\n\n`);
|
| 129 |
+
}
|
| 130 |
+
|
| 131 |
+
// Then stream the content
|
| 132 |
+
console.log('Starting content stream...');
|
| 133 |
+
for await (const chunk of contentStream) {
|
| 134 |
+
if (res.destroyed) break;
|
| 135 |
+
|
| 136 |
+
const responseChunk = {
|
| 137 |
+
choices: [{
|
| 138 |
+
delta: {
|
| 139 |
+
content: chunk
|
| 140 |
+
}
|
| 141 |
+
}]
|
| 142 |
+
};
|
| 143 |
+
|
| 144 |
+
res.write(`data: ${JSON.stringify(responseChunk)}\n\n`);
|
| 145 |
+
contentText += chunk;
|
| 146 |
+
}
|
| 147 |
+
|
| 148 |
+
// Send completion marker
|
| 149 |
+
res.write('data: [DONE]\n\n');
|
| 150 |
+
res.end();
|
| 151 |
+
console.log('Stream completed successfully');
|
| 152 |
+
|
| 153 |
+
} catch (streamError) {
|
| 154 |
+
console.error('Streaming error:', streamError);
|
| 155 |
+
if (!res.destroyed) {
|
| 156 |
+
res.write(`data: ${JSON.stringify({ error: streamError.message })}\n\n`);
|
| 157 |
+
res.end();
|
| 158 |
+
}
|
| 159 |
+
}
|
| 160 |
+
|
| 161 |
} catch (error) {
|
| 162 |
console.error('Error:', error);
|
| 163 |
+
if (!res.destroyed) {
|
| 164 |
+
res.status(500).json({ error: error.message });
|
| 165 |
+
}
|
| 166 |
}
|
| 167 |
});
|
| 168 |
|
test-xsai.js
ADDED
|
@@ -0,0 +1,94 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
#!/usr/bin/env node
|
| 2 |
+
|
| 3 |
+
import { streamText } from '@xsai/stream-text';
|
| 4 |
+
import { generateText } from '@xsai/generate-text';
|
| 5 |
+
import { extractReasoningStream } from '@xsai/utils-reasoning';
|
| 6 |
+
|
| 7 |
+
// Test configuration - replace with your actual API details
|
| 8 |
+
const testConfig = {
|
| 9 |
+
baseURL: 'https://api.deepseek.com/v1', // Example - replace with your API endpoint
|
| 10 |
+
apiKey: 'your-api-key-here', // Replace with your actual API key
|
| 11 |
+
model: 'deepseek-r1' // Replace with your model
|
| 12 |
+
};
|
| 13 |
+
|
| 14 |
+
async function testGenerateText() {
|
| 15 |
+
console.log('π§ͺ Testing xsai generateText integration...\n');
|
| 16 |
+
|
| 17 |
+
try {
|
| 18 |
+
const { text } = await generateText({
|
| 19 |
+
apiKey: testConfig.apiKey,
|
| 20 |
+
baseURL: testConfig.baseURL,
|
| 21 |
+
model: testConfig.model,
|
| 22 |
+
messages: [{
|
| 23 |
+
role: 'user',
|
| 24 |
+
content: 'Summarize this in 3-5 words: Hello, how are you today? I am doing well, thanks for asking!'
|
| 25 |
+
}],
|
| 26 |
+
temperature: 0.2,
|
| 27 |
+
max_tokens: 20
|
| 28 |
+
});
|
| 29 |
+
|
| 30 |
+
console.log('β
GenerateText test successful!');
|
| 31 |
+
console.log('π Summary result:', text.trim());
|
| 32 |
+
|
| 33 |
+
} catch (error) {
|
| 34 |
+
console.error('β GenerateText test failed:', error.message);
|
| 35 |
+
}
|
| 36 |
+
}
|
| 37 |
+
|
| 38 |
+
async function testStreamText() {
|
| 39 |
+
console.log('\nπ§ͺ Testing xsai streamText with reasoning extraction...\n');
|
| 40 |
+
|
| 41 |
+
try {
|
| 42 |
+
const { textStream } = await streamText({
|
| 43 |
+
apiKey: testConfig.apiKey,
|
| 44 |
+
baseURL: testConfig.baseURL,
|
| 45 |
+
model: testConfig.model,
|
| 46 |
+
messages: [
|
| 47 |
+
{ role: 'system', content: 'You are a helpful assistant. Use <think></think> tags to show your reasoning process.' },
|
| 48 |
+
{ role: 'user', content: 'Why is the sky blue? Please think through this step by step.' }
|
| 49 |
+
]
|
| 50 |
+
});
|
| 51 |
+
|
| 52 |
+
// Extract reasoning and content streams
|
| 53 |
+
const { reasoningStream, textStream: contentStream } = extractReasoningStream(textStream);
|
| 54 |
+
|
| 55 |
+
console.log('π§ Reasoning process:');
|
| 56 |
+
console.log('='.repeat(50));
|
| 57 |
+
let reasoningText = '';
|
| 58 |
+
for await (const chunk of reasoningStream) {
|
| 59 |
+
process.stdout.write(chunk);
|
| 60 |
+
reasoningText += chunk;
|
| 61 |
+
}
|
| 62 |
+
|
| 63 |
+
console.log('\n' + '='.repeat(50));
|
| 64 |
+
console.log('\n㪠Final answer:');
|
| 65 |
+
console.log('='.repeat(50));
|
| 66 |
+
for await (const chunk of contentStream) {
|
| 67 |
+
process.stdout.write(chunk);
|
| 68 |
+
}
|
| 69 |
+
|
| 70 |
+
console.log('\n' + '='.repeat(50));
|
| 71 |
+
console.log('\nβ
StreamText test successful!');
|
| 72 |
+
|
| 73 |
+
} catch (error) {
|
| 74 |
+
console.error('β StreamText test failed:', error.message);
|
| 75 |
+
}
|
| 76 |
+
}
|
| 77 |
+
|
| 78 |
+
async function runTests() {
|
| 79 |
+
console.log('π Testing xsai integration for thinking-model-client\n');
|
| 80 |
+
|
| 81 |
+
// Check if configuration is set
|
| 82 |
+
if (testConfig.apiKey === 'your-api-key-here') {
|
| 83 |
+
console.log('β οΈ Please update the testConfig in this file with your actual API details before running tests.');
|
| 84 |
+
console.log('π Edit the testConfig object at the top of this file.');
|
| 85 |
+
return;
|
| 86 |
+
}
|
| 87 |
+
|
| 88 |
+
await testGenerateText();
|
| 89 |
+
await testStreamText();
|
| 90 |
+
|
| 91 |
+
console.log('\nπ All tests completed!');
|
| 92 |
+
}
|
| 93 |
+
|
| 94 |
+
runTests().catch(console.error);
|