feat: MCPollinations 1.1.3 release

Added configurable parameters for text generation in the `respondText` tool.
Updated configuration generator with more accurate model information and tier-specific messaging.
Enhanced tool descriptions to clarify user config priority and override behavior.
- CHANGELOG.md +20 -0
- README.md +61 -0
- example-mcp.json +4 -1
- generate-mcp-config.js +18 -5
- package.json +1 -1
- pollinations-mcp-server.js +3 -3
- src/services/imageSchema.js +4 -4
- src/services/textSchema.js +14 -2
- src/services/textService.js +12 -2
CHANGELOG.md
CHANGED
@@ -5,6 +5,26 @@ All notable changes to the MCPollinations will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [1.1.3] - `2025-07-25`
+
+### Added
+- **Enhanced Text Generation**: Added configurable parameters for text generation
+  - `temperature` parameter (0.0-2.0) for controlling randomness in output
+  - `top_p` parameter (0.0-1.0) for controlling diversity via nucleus sampling
+  - `system` parameter for providing system prompts to guide model behavior
+- Configuration generator now includes prompts for text generation parameters
+- **User Configuration Priority**: Added documentation and tool descriptions emphasizing user-configured settings are used as defaults
+- **Improved Model Guidance**: Updated tool schemas to reference listTextModels and listImageModels for current model lists
+- **Text Generation Privacy**: Added hardcoded `private=true` parameter to text generation requests
+
+### Changed
+- Updated configuration generator with more accurate model information and tier-specific messaging
+- Enhanced tool descriptions to clarify user config priority and override behavior
+- Improved path guidance for Windows users in configuration prompts
+
+### Fixed
+- Added missing `private=true` parameter to text generation API requests
+
 ## [1.1.2] - `2025-07-25`
 
 ### Added
README.md
CHANGED
@@ -132,6 +132,23 @@ When generating your MCP configuration, you'll be prompted for optional authenti
 
 Both parameters are completely optional. Leave them empty or unset to use the free tier.
 
+## Using Your Configuration Settings
+
+MCPollinations respects your MCP configuration settings as defaults. When you ask an AI assistant to generate content:
+
+- **Your configured models, output directories, and parameters are used automatically**
+- **To override**: Specifically instruct the AI to use different settings
+  - "Generate an image using the gptimage model"
+  - "Save this image to my Desktop folder"
+  - "Use a temperature of 1.2 for this text generation"
+
+**Example Instructions:**
+- ✅ "Generate a sunset image" → Uses your configured model and output directory
+- ✅ "Generate a sunset image with the flux model" → Overrides model only
+- ✅ "Generate a sunset image and save it to C:\Pictures" → Overrides output path only
+
+This ensures your preferences are always respected unless you specifically want different settings for a particular request.
+
 ## Troubleshooting
 
 ### "AbortController is not defined" Error

@@ -183,6 +200,50 @@ The MCP server provides the following tools:
 6. `listTextModels` - Lists available models for text generation
 7. `listAudioVoices` - Lists all available voices for audio generation
 
+## Text Generation Details
+
+### Available Parameters
+
+The `respondText` tool supports several parameters for fine-tuning text generation:
+
+- **`model`**: Choose from available text models (use `listTextModels` to see current options)
+- **`temperature`** (0.0-2.0): Controls randomness in the output
+  - Lower values (0.1-0.7) = more focused and deterministic
+  - Higher values (0.8-2.0) = more creative and random
+- **`top_p`** (0.0-1.0): Controls diversity via nucleus sampling
+  - Lower values = more focused on likely tokens
+  - Higher values = considers more token possibilities
+- **`system`**: System prompt to guide the model's behavior and personality
+
+### Customizing Text Generation
+
+```javascript
+// Example options for respondText
+const options = {
+  model: "openai",   // Model selection
+  temperature: 0.7,  // Balanced creativity
+  top_p: 0.9,        // High diversity
+  system: "You are a helpful assistant that explains things clearly and concisely."
+};
+```
+
+### Configuration Examples
+
+In your MCP configuration, you can set defaults:
+
+```json
+{
+  "default_params": {
+    "text": {
+      "model": "openai",
+      "temperature": 0.7,
+      "top_p": 0.9,
+      "system": "You are a helpful coding assistant."
+    }
+  }
+}
+```
+
 ## Image Generation Details
 
 ### Default Behavior
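The README text above documents ranges for `temperature` (0.0-2.0) and `top_p` (0.0-1.0) but does not say the tool enforces them. A caller-side clamp is a small sketch; `clampTextParams` is a hypothetical helper, not part of MCPollinations:

```javascript
// Hypothetical helper: clamp text-generation parameters to the
// ranges documented in the README before building a request.
function clampTextParams({ temperature, top_p } = {}) {
  const clamp = (value, min, max) =>
    typeof value === 'number' ? Math.min(max, Math.max(min, value)) : undefined;
  return {
    temperature: clamp(temperature, 0.0, 2.0), // documented range 0.0-2.0
    top_p: clamp(top_p, 0.0, 1.0),             // documented range 0.0-1.0
  };
}
```

Out-of-range values are pulled back to the nearest bound instead of being rejected, which keeps a misconfigured default from failing a request outright.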
example-mcp.json
CHANGED
@@ -21,7 +21,10 @@
       "enhance": true
     },
     "text": {
-      "model": "openai"
+      "model": "openai",
+      "temperature": 0.7,
+      "top_p": 0.9,
+      "system": ""
     },
     "audio": {
       "voice": "alloy"
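The `text` block above sets defaults that the server is documented to use unless a request overrides them. A minimal sketch of that precedence, under the assumption that overrides simply win over config values (`resolveTextParams` and `configDefaults` are hypothetical names, not part of MCPollinations):

```javascript
// Hypothetical defaults, mirroring the "text" block in example-mcp.json.
const configDefaults = { model: "openai", temperature: 0.7, top_p: 0.9, system: "" };

// Merge request arguments over user-configured defaults.
// Spread order means explicit request args win over defaults.
function resolveTextParams(requestArgs, defaults = configDefaults) {
  const merged = { ...defaults, ...requestArgs };
  // Drop an empty system prompt so it is not sent at all.
  if (!merged.system) delete merged.system;
  return merged;
}
```

For example, a request that only sets `temperature: 1.2` would keep the configured `model` and `top_p` while overriding just the temperature.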
generate-mcp-config.js
CHANGED
@@ -32,7 +32,10 @@ const defaultConfig = {
       "enhance": true
     },
     "text": {
-      "model": "openai"
+      "model": "openai",
+      "temperature": 0.7,
+      "top_p": 0.9,
+      "system": ""
     },
     "audio": {
       "voice": "alloy"

@@ -90,6 +93,7 @@ async function generateMcpConfig() {
   // Resources customization
   console.log('\nResource Directories:');
   console.log('Note: Using relative path (starting with "./") is recommended for portability.');
+  console.log('An absolute path is recommended on Windows, or if you will not be moving the configuration file.');
   console.log('These directories will be created automatically if they don\'t exist.');
 
   const outputDir = await prompt(`Output directory for saved files (default: "${config[configKey].resources.output_dir}"): `);

@@ -99,8 +103,8 @@ async function generateMcpConfig() {
 
   // Authentication configuration
   console.log('\nAuthentication Configuration (Optional):');
-  console.log('These settings are optional and provide access to more models and better rate limits.');
-  console.log('Leave empty to use the free tier.');
+  console.log('These settings are optional and should be used only if you are a Flower or Nectar tier user. Configuring them provides access to more models and better rate limits for those tiers.');
+  console.log('Leave empty to use the free (seed) tier. Note: some models may not be available without authentication.');
   console.log('Note: You can also set these via environment variables POLLINATIONS_TOKEN and POLLINATIONS_REFERRER');
 
   const authToken = await prompt('API Token (optional): ');

@@ -130,7 +134,7 @@ async function generateMcpConfig() {
   const customizeImage = await promptYesNo('Customize image generation parameters?', false);
 
   if (customizeImage) {
-    console.log('Available image models: "flux", "turbo"');
+    console.log('Available image models: "flux", "turbo", "gptimage". Use the listImageModels tool to see the most recent model list');
     const imageModel = await prompt('Default image model (default: "flux"): ');
     if (imageModel) config[configKey].default_params.image.model = imageModel;
 
@@ -152,9 +156,18 @@ async function generateMcpConfig() {
   const customizeText = await promptYesNo('Customize text generation parameters?', false);
 
   if (customizeText) {
-    console.log('
+    console.log('Some generally available text models: "openai", "openai-large", "openai-reasoning". Model choices change frequently - use the listTextModels tool to see all models');
     const textModel = await prompt('Default text model (default: "openai"): ');
     if (textModel) config[configKey].default_params.text.model = textModel;
+
+    const textTemperature = await prompt('Default temperature (0.0-2.0, controls randomness, default: 0.7): ');
+    if (textTemperature) config[configKey].default_params.text.temperature = parseFloat(textTemperature);
+
+    const textTopP = await prompt('Default top_p (0.0-1.0, controls diversity, default: 0.9): ');
+    if (textTopP) config[configKey].default_params.text.top_p = parseFloat(textTopP);
+
+    const textSystem = await prompt('Default system prompt (optional, guides model behavior): ');
+    if (textSystem) config[configKey].default_params.text.system = textSystem;
   }
 
   // Audio parameters
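The new prompts above hand user input straight to `parseFloat`, which accepts strings like `"0.7abc"` and performs no range checking. A stricter parse is easy to sketch; `parseNumericOption` is a hypothetical helper, not something the generator currently defines:

```javascript
// Hypothetical strict parser for the numeric config prompts:
// Number() rejects trailing garbage that parseFloat would accept,
// and out-of-range values fall back to the default (undefined).
function parseNumericOption(input, min, max) {
  if (input === '' || input === undefined) return undefined; // keep default
  const value = Number(input);
  if (Number.isNaN(value) || value < min || value > max) {
    return undefined; // invalid input: keep default
  }
  return value;
}
```

Usage would look like `parseNumericOption(textTemperature, 0.0, 2.0)` in place of the bare `parseFloat(textTemperature)`.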
package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@pinkpixel/mcpollinations",
-  "version": "1.1.2",
+  "version": "1.1.3",
   "description": "Model Context Protocol (MCP) server for the Pollinations APIs with image saving functionality.",
   "type": "module",
   "bin": {
pollinations-mcp-server.js
CHANGED
@@ -125,7 +125,7 @@ if (finalAuthConfig) {
 const server = new Server(
   {
     name: '@pinkpixel/mcpollinations',
-    version: '1.1.2',
+    version: '1.1.3',
   },
   {
     capabilities: {

@@ -292,8 +292,8 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
     }
   } else if (name === 'respondText') {
     try {
-      const { prompt, model = "openai", seed } = args;
-      const result = await respondText(prompt, model, seed, finalAuthConfig);
+      const { prompt, model = "openai", seed, temperature, top_p, system } = args;
+      const result = await respondText(prompt, model, seed, temperature, top_p, system, finalAuthConfig);
       return {
         content: [
           { type: 'text', text: result }
src/services/imageSchema.js
CHANGED
@@ -7,7 +7,7 @@
  */
 export const generateImageUrlSchema = {
   name: 'generateImageUrl',
-  description: 'Generate an image URL from a text prompt',
+  description: 'Generate an image URL from a text prompt. User-configured settings in MCP config will be used as defaults unless specifically overridden.',
   inputSchema: {
     type: 'object',
     properties: {

@@ -17,7 +17,7 @@ export const generateImageUrlSchema = {
       },
       model: {
         type: 'string',
-        description: 'Model name to use for generation (default: "flux").',
+        description: 'Model name to use for generation (default: user config or "flux"). Use listImageModels to see all available models'
       },
       seed: {
         type: 'number',

@@ -49,7 +49,7 @@ export const generateImageUrlSchema = {
  */
 export const generateImageSchema = {
   name: 'generateImage',
-  description: 'Generate an image, return the base64-encoded data, and save to a file by default',
+  description: 'Generate an image, return the base64-encoded data, and save to a file by default. User-configured settings in MCP config will be used as defaults unless specifically overridden.',
   inputSchema: {
     type: 'object',
     properties: {

@@ -59,7 +59,7 @@ export const generateImageSchema = {
       },
       model: {
         type: 'string',
-        description: 'Model name to use for generation (default: "flux").',
+        description: 'Model name to use for generation (default: user config or "flux"). Use listImageModels to see all available models'
       },
       seed: {
         type: 'number',
src/services/textSchema.js
CHANGED
@@ -7,7 +7,7 @@
  */
 export const respondTextSchema = {
   name: 'respondText',
-  description: 'Respond with text to a prompt using the Pollinations Text API',
+  description: 'Respond with text to a prompt using the Pollinations Text API. User-configured settings in MCP config will be used as defaults unless specifically overridden.',
   inputSchema: {
     type: 'object',
     properties: {

@@ -17,11 +17,23 @@ export const respondTextSchema = {
       },
       model: {
         type: 'string',
-        description: 'Model to use for text generation (default: "openai")',
+        description: 'Model to use for text generation (default: user config or "openai"). Use listTextModels to see all available models'
       },
       seed: {
         type: 'number',
         description: 'Seed for reproducible results (default: random)'
+      },
+      temperature: {
+        type: 'number',
+        description: 'Controls randomness in the output (0.0 to 2.0, default: user config or model default)'
+      },
+      top_p: {
+        type: 'number',
+        description: 'Controls diversity via nucleus sampling (0.0 to 1.0, default: user config or model default)'
+      },
+      system: {
+        type: 'string',
+        description: 'System prompt to guide the model\'s behavior (default: user config or none)'
       }
     },
     required: ['prompt']
src/services/textService.js
CHANGED
@@ -8,12 +8,16 @@
  * Responds with text to a prompt using the Pollinations Text API
  *
  * @param {string} prompt - The text prompt to generate a response for
- * @param {string} [model="openai"] - Model to use for text generation.
+ * @param {string} [model="openai"] - Model to use for text generation. Use listTextModels to see all available models
  * @param {number} [seed] - Seed for reproducible results (default: random)
+ * @param {number} [temperature] - Controls randomness in the output (0.0 to 2.0)
+ * @param {number} [top_p] - Controls diversity via nucleus sampling (0.0 to 1.0)
+ * @param {string} [system] - System prompt to guide the model's behavior
  * @param {Object} [authConfig] - Optional authentication configuration {token, referrer}
  * @returns {Promise<string>} - The generated text response
+ * @note Always includes private=true parameter
  */
-export async function respondText(prompt, model = "openai", seed = Math.floor(Math.random() * 1000000), authConfig = null) {
+export async function respondText(prompt, model = "openai", seed = Math.floor(Math.random() * 1000000), temperature = null, top_p = null, system = null, authConfig = null) {
   if (!prompt || typeof prompt !== 'string') {
     throw new Error('Prompt is required and must be a string');
   }

@@ -22,6 +26,12 @@ export async function respondText(prompt, model = "openai", seed = Math.floor(Ma
   const queryParams = new URLSearchParams();
   if (model) queryParams.append('model', model);
   if (seed !== undefined) queryParams.append('seed', seed);
+  if (temperature !== null) queryParams.append('temperature', temperature);
+  if (top_p !== null) queryParams.append('top_p', top_p);
+  if (system) queryParams.append('system', system);
+
+  // Always set private to true
+  queryParams.append('private', 'true');
 
   // Construct the URL
   const encodedPrompt = encodeURIComponent(prompt);
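The query-string logic this diff adds to `respondText` can be exercised in isolation. The sketch below extracts it into a standalone function (`buildTextQuery` is a hypothetical name; the real code builds the params inline inside `respondText`):

```javascript
// Mirror of the query-string construction in respondText: optional
// parameters are appended only when set, and private=true is always
// included, matching the "Text Generation Privacy" changelog entry.
function buildTextQuery({ model, seed, temperature = null, top_p = null, system = null }) {
  const queryParams = new URLSearchParams();
  if (model) queryParams.append('model', model);
  if (seed !== undefined) queryParams.append('seed', seed);
  if (temperature !== null) queryParams.append('temperature', temperature);
  if (top_p !== null) queryParams.append('top_p', top_p);
  if (system) queryParams.append('system', system);
  queryParams.append('private', 'true'); // always private
  return queryParams.toString();
}
```

Note that `null` sentinels (rather than `undefined` checks) match the new function signature's `temperature = null, top_p = null, system = null` defaults.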