<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>oapix</title>
<link rel="stylesheet" href="/styles.css">
<script type="module" src="/app.js"></script>
</head>
<body>
<main class="landing-shell">
<section class="hero-card">
<p class="eyebrow">OpenAI-Compatible Proxy</p>
<h1>Send text, images, and audio through one proxy endpoint.</h1>
<p class="hero-copy">
<code>oapix</code> forwards chat completions to your configured upstream, accepts image and audio inputs,
converts audio URLs to MP3, and exposes media outputs as temporary URLs.
</p>
<div class="hero-actions">
<a class="primary-link" href="/chatclient/">Open Chat Client</a>
<a class="secondary-link" href="/v1/health">Health Check</a>
</div>
</section>
<section class="info-grid">
<article class="info-card">
<h2>Endpoint</h2>
<code>POST /v1/chat/completions</code>
<p>Call the proxy exactly like a standard chat completions API, adding multimodal content parts as needed.</p>
</article>
<article class="info-card">
<h2>Image Input</h2>
<p>Provide <code>image_url.url</code> as an <code>https</code> URL, a data URL, or raw base64.</p>
</article>
<article class="info-card">
<h2>Audio Input</h2>
<p>Provide <code>input_audio.data</code> plus <code>format</code>, or use <code>input_audio.url</code> and let the proxy convert it.</p>
</article>
</section>
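The cards above describe the accepted content-part shapes. A minimal Python sketch of helpers that build those parts, assuming the field names shown on this page (the helper function names themselves are illustrative, not part of the proxy):

```python
import base64


def image_part(url: str) -> dict:
    # Image input: an https URL, a data URL, or raw base64 all go in image_url.url.
    return {"type": "image_url", "image_url": {"url": url}}


def image_part_from_bytes(data: bytes, mime: str = "image/jpeg") -> dict:
    # Encode local image bytes as a data URL.
    b64 = base64.b64encode(data).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}}


def audio_part_from_bytes(data: bytes, fmt: str = "mp3") -> dict:
    # Audio input: base64-encoded data plus an explicit format.
    return {
        "type": "input_audio",
        "input_audio": {
            "data": base64.b64encode(data).decode("ascii"),
            "format": fmt,
        },
    }


def audio_part_from_url(url: str) -> dict:
    # Alternatively, pass a URL and let the proxy fetch and convert it.
    return {"type": "input_audio", "input_audio": {"url": url}}
```

Any mix of these parts can be placed in a user message's `content` array.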
<section class="code-card">
<div class="section-head">
<p class="eyebrow">Quick Start</p>
<h2>Example request</h2>
</div>
<pre><code>{
"model": "gpt-4.1-mini",
"messages": [
{
"role": "user",
"content": [
{ "type": "text", "text": "Describe this image." },
{
"type": "image_url",
"image_url": { "url": "https://example.com/photo.jpg" }
}
]
}
],
"audio": {
"voice": "alloy",
"format": "mp3"
}
}</code></pre>
</section>
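The example request above can be sent with nothing beyond the standard library. A sketch assuming a local deployment (the base URL and port are placeholders; adapt them to wherever the proxy is running):

```python
import json
import urllib.request


def chat_completion(payload: dict, base_url: str = "http://localhost:8080") -> dict:
    # POST the payload to the proxy's chat completions endpoint.
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# The example payload from this page, assembled in Python.
payload = {
    "model": "gpt-4.1-mini",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
    "audio": {"voice": "alloy", "format": "mp3"},
}
```

Calling `chat_completion(payload)` returns the parsed JSON response from the upstream.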
</main>
</body>
</html>