<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>AI Erotic Chat Recommendations</title>
<script src="https://cdn.tailwindcss.com"></script>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.4.0/css/all.min.css">
<style>
.gradient-bg {
background: linear-gradient(135deg, #6b21a8 0%, #1e40af 100%);
}
.model-card:hover {
transform: translateY(-5px);
box-shadow: 0 20px 25px -5px rgba(0, 0, 0, 0.1), 0 10px 10px -5px rgba(0, 0, 0, 0.04);
}
.fade-in {
opacity: 0;
transition: opacity 0.5s ease-in-out;
}
</style>
</head>
<body class="bg-gray-100 min-h-screen">
<div class="gradient-bg text-white py-12 px-4 sm:px-6 lg:px-8">
<div class="max-w-7xl mx-auto">
<div class="text-center">
<h1 class="text-4xl font-extrabold tracking-tight sm:text-5xl lg:text-6xl">
<span class="block">AI Erotic Chat & Roleplay</span>
<span class="block text-purple-200">Recommendations for Your PC</span>
</h1>
<p class="mt-6 max-w-3xl mx-auto text-xl text-purple-100">
Based on your powerful Windows 10 Pro system with an RTX 3060 GPU
</p>
</div>
</div>
</div>
<div class="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8 py-12">
<div class="bg-white rounded-xl shadow-lg overflow-hidden mb-12 fade-in">
<div class="p-6 sm:p-8">
<div class="flex flex-col md:flex-row items-start">
<div class="flex-shrink-0 mb-6 md:mb-0 md:mr-8">
<div class="flex items-center justify-center h-16 w-16 rounded-full bg-purple-100 text-purple-600">
<i class="fas fa-desktop text-2xl"></i>
</div>
</div>
<div class="flex-1">
<h2 class="text-2xl font-bold text-gray-900 mb-4">Your System Specifications</h2>
<div class="grid grid-cols-1 md:grid-cols-2 gap-4">
<div class="bg-gray-50 p-4 rounded-lg">
<h3 class="font-medium text-gray-900"><i class="fas fa-microchip mr-2 text-purple-600"></i> Processor</h3>
<p class="mt-1 text-gray-600">Intel Core i7-7700K @ 4.20GHz (4 cores, 8 threads)</p>
</div>
<div class="bg-gray-50 p-4 rounded-lg">
<h3 class="font-medium text-gray-900"><i class="fas fa-memory mr-2 text-purple-600"></i> Memory</h3>
<p class="mt-1 text-gray-600">32GB DDR4 @ 2666MHz (2x16GB)</p>
</div>
<div class="bg-gray-50 p-4 rounded-lg">
<h3 class="font-medium text-gray-900"><i class="fas fa-hdd mr-2 text-purple-600"></i> Storage</h3>
<p class="mt-1 text-gray-600">2x 1TB SSDs (M.2 NVMe + SATA)</p>
</div>
<div class="bg-gray-50 p-4 rounded-lg">
<h3 class="font-medium text-gray-900"><i class="fas fa-gamepad mr-2 text-purple-600"></i> Graphics</h3>
<p class="mt-1 text-gray-600">NVIDIA RTX 3060 (12GB VRAM)</p>
</div>
</div>
</div>
</div>
</div>
</div>
<h2 class="text-3xl font-bold text-gray-900 mb-8 text-center">Recommended AI Models</h2>
<p class="text-lg text-gray-600 mb-12 max-w-3xl mx-auto text-center">
Your system is well-equipped to run medium-sized local AI models for erotic chat and roleplay.
Here are the best options based on your hardware:
</p>
<div class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-8">
<!-- Model 1 -->
<div class="bg-white rounded-xl shadow-md overflow-hidden model-card transition-all duration-300 hover:border-purple-500 border-2 border-transparent fade-in">
<div class="p-6">
<div class="flex items-center mb-4">
<div class="flex-shrink-0 bg-purple-100 p-3 rounded-lg">
<i class="fas fa-robot text-purple-600 text-2xl"></i>
</div>
<h3 class="ml-4 text-xl font-bold text-gray-900">MythoMax-L2-13B</h3>
</div>
<p class="text-gray-600 mb-4">
A 13B parameter model fine-tuned for NSFW roleplay with excellent memory retention and character consistency.
</p>
<div class="mb-4">
<span class="inline-flex items-center px-3 py-1 rounded-full text-sm font-medium bg-purple-100 text-purple-800">
<i class="fas fa-bolt mr-1"></i> Best Balance
</span>
</div>
<div class="space-y-2">
<div class="flex items-center">
<i class="fas fa-check-circle text-green-500 mr-2"></i>
<span class="text-gray-700">Runs well on your RTX 3060 (4-bit quantized)</span>
</div>
<div class="flex items-center">
<i class="fas fa-check-circle text-green-500 mr-2"></i>
<span class="text-gray-700">Good memory for long conversations</span>
</div>
<div class="flex items-center">
<i class="fas fa-check-circle text-green-500 mr-2"></i>
<span class="text-gray-700">Excellent at creative scenarios</span>
</div>
</div>
<div class="mt-6">
<a href="https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF" target="_blank" class="inline-flex items-center px-4 py-2 border border-transparent text-sm font-medium rounded-md shadow-sm text-white bg-purple-600 hover:bg-purple-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-purple-500">
Download Model
<i class="fas fa-external-link-alt ml-2"></i>
</a>
</div>
</div>
</div>
<!-- Model 2 -->
<div class="bg-white rounded-xl shadow-md overflow-hidden model-card transition-all duration-300 hover:border-purple-500 border-2 border-transparent fade-in">
<div class="p-6">
<div class="flex items-center mb-4">
<div class="flex-shrink-0 bg-purple-100 p-3 rounded-lg">
<i class="fas fa-brain text-purple-600 text-2xl"></i>
</div>
<h3 class="ml-4 text-xl font-bold text-gray-900">Noromaid-7B</h3>
</div>
<p class="text-gray-600 mb-4">
A smaller 7B model trained specifically for erotic roleplay (ERP), with surprisingly strong output for its size.
</p>
<div class="mb-4">
<span class="inline-flex items-center px-3 py-1 rounded-full text-sm font-medium bg-blue-100 text-blue-800">
<i class="fas fa-tachometer-alt mr-1"></i> Fastest Option
</span>
</div>
<div class="space-y-2">
<div class="flex items-center">
<i class="fas fa-check-circle text-green-500 mr-2"></i>
<span class="text-gray-700">Very responsive on your hardware</span>
</div>
<div class="flex items-center">
<i class="fas fa-check-circle text-green-500 mr-2"></i>
<span class="text-gray-700">Lower VRAM usage allows for larger context</span>
</div>
<div class="flex items-center">
<i class="fas fa-check-circle text-green-500 mr-2"></i>
<span class="text-gray-700">Direct and expressive responses</span>
</div>
</div>
<div class="mt-6">
<a href="https://huggingface.co/TheBloke/Noromaid-7B-v0.1-GGUF" target="_blank" class="inline-flex items-center px-4 py-2 border border-transparent text-sm font-medium rounded-md shadow-sm text-white bg-purple-600 hover:bg-purple-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-purple-500">
Download Model
<i class="fas fa-external-link-alt ml-2"></i>
</a>
</div>
</div>
</div>
<!-- Model 3 -->
<div class="bg-white rounded-xl shadow-md overflow-hidden model-card transition-all duration-300 hover:border-purple-500 border-2 border-transparent fade-in">
<div class="p-6">
<div class="flex items-center mb-4">
<div class="flex-shrink-0 bg-purple-100 p-3 rounded-lg">
<i class="fas fa-gem text-purple-600 text-2xl"></i>
</div>
<h3 class="ml-4 text-xl font-bold text-gray-900">Xwin-MLewd-13B</h3>
</div>
<p class="text-gray-600 mb-4">
A 13B model with excellent NSFW capabilities and strong roleplaying skills, slightly more verbose than MythoMax.
</p>
<div class="mb-4">
<span class="inline-flex items-center px-3 py-1 rounded-full text-sm font-medium bg-yellow-100 text-yellow-800">
<i class="fas fa-star mr-1"></i> Premium Choice
</span>
</div>
<div class="space-y-2">
<div class="flex items-center">
<i class="fas fa-check-circle text-green-500 mr-2"></i>
<span class="text-gray-700">Pushes your hardware to its limits</span>
</div>
<div class="flex items-center">
<i class="fas fa-check-circle text-green-500 mr-2"></i>
<span class="text-gray-700">More detailed and descriptive responses</span>
</div>
<div class="flex items-center">
<i class="fas fa-check-circle text-green-500 mr-2"></i>
<span class="text-gray-700">Excellent at maintaining character</span>
</div>
</div>
<div class="mt-6">
<a href="https://huggingface.co/TheBloke/Xwin-MLewd-13B-V0.2-GGUF" target="_blank" class="inline-flex items-center px-4 py-2 border border-transparent text-sm font-medium rounded-md shadow-sm text-white bg-purple-600 hover:bg-purple-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-purple-500">
Download Model
<i class="fas fa-external-link-alt ml-2"></i>
</a>
</div>
</div>
</div>
</div>
<div class="mt-16 bg-white rounded-xl shadow-lg overflow-hidden fade-in" id="xwin-downloads">
<div class="p-6 sm:p-8">
<h2 class="text-2xl font-bold text-gray-900 mb-6">Complete Xwin-MLewd-13B Setup Package</h2>
<div class="space-y-6">
<div class="bg-purple-50 p-4 rounded-lg mb-6">
<p class="text-purple-800 font-medium"><i class="fas fa-info-circle mr-2"></i> This package contains everything you need to run Xwin-MLewd-13B locally: the WebUI installer, the quantized model file, and the GPU dependencies.</p>
</div>
<div class="flex flex-col md:flex-row">
<div class="flex-shrink-0 mb-4 md:mb-0 md:mr-6">
<div class="flex items-center justify-center h-12 w-12 rounded-full bg-purple-100 text-purple-600">
<i class="fas fa-box-open text-xl"></i>
</div>
</div>
<div>
<h3 class="text-lg font-medium text-gray-900">1. Text Generation WebUI (One-Click Installer)</h3>
<p class="mt-2 text-gray-600 mb-4">
The easiest way to run Xwin-MLewd-13B locally with GPU acceleration.
</p>
<a href="https://github.com/oobabooga/text-generation-webui/releases" target="_blank" class="inline-flex items-center px-4 py-2 border border-transparent text-sm font-medium rounded-md shadow-sm text-white bg-purple-600 hover:bg-purple-700">
<i class="fas fa-download mr-2"></i> Download Installer
</a>
</div>
</div>
<div class="flex flex-col md:flex-row">
<div class="flex-shrink-0 mb-4 md:mb-0 md:mr-6">
<div class="flex items-center justify-center h-12 w-12 rounded-full bg-purple-100 text-purple-600">
<i class="fas fa-file-archive text-xl"></i>
</div>
</div>
<div>
<h3 class="text-lg font-medium text-gray-900">2. Xwin-MLewd-13B GGUF Model File</h3>
<p class="mt-2 text-gray-600 mb-4">
The 4-bit quantized version that runs best on your RTX 3060 (Q4_K_M recommended).
</p>
<a href="https://huggingface.co/TheBloke/Xwin-MLewd-13B-V0.2-GGUF/resolve/main/xwin-mlewd-13b-v0.2.Q4_K_M.gguf" target="_blank" class="inline-flex items-center px-4 py-2 border border-transparent text-sm font-medium rounded-md shadow-sm text-white bg-purple-600 hover:bg-purple-700">
<i class="fas fa-download mr-2"></i> Download Model (Q4_K_M, ~7.9GB)
</a>
</div>
</div>
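If you'd rather script the download (for example, to resume or verify it), the button above maps to a predictable direct URL. A stdlib-only Python sketch; the <code>download</code> helper is illustrative and streams the full ~8 GB file, so only call it when you actually want the model:

```python
# Build the direct Hugging Face download URL for a GGUF file.
import urllib.request

REPO_ID = "TheBloke/Xwin-MLewd-13B-V0.2-GGUF"
FILENAME = "xwin-mlewd-13b-v0.2.Q4_K_M.gguf"

def direct_url(repo_id: str, filename: str) -> str:
    # Hugging Face serves repository files under /resolve/main/.
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"

def download(url: str, dest: str) -> None:
    # Streams the ~8 GB file to disk; run only when you want the model.
    urllib.request.urlretrieve(url, dest)

url = direct_url(REPO_ID, FILENAME)
```

The `huggingface_hub` package's `hf_hub_download` is a more robust alternative (resumable, cached), but the raw URL above is all the browser button uses.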
<div class="flex flex-col md:flex-row">
<div class="flex-shrink-0 mb-4 md:mb-0 md:mr-6">
<div class="flex items-center justify-center h-12 w-12 rounded-full bg-purple-100 text-purple-600">
<i class="fas fa-cogs text-xl"></i>
</div>
</div>
<div>
<h3 class="text-lg font-medium text-gray-900">3. Required Dependencies</h3>
<p class="mt-2 text-gray-600 mb-4">
NVIDIA CUDA Toolkit and cuDNN for GPU acceleration. The one-click installer bundles its own CUDA runtime, so these manual downloads are only needed for custom setups.
</p>
<div class="space-x-2">
<a href="https://developer.nvidia.com/cuda-downloads" target="_blank" class="inline-flex items-center px-4 py-2 border border-transparent text-sm font-medium rounded-md shadow-sm text-white bg-blue-600 hover:bg-blue-700">
<i class="fas fa-microchip mr-2"></i> CUDA Toolkit
</a>
<a href="https://developer.nvidia.com/cudnn" target="_blank" class="inline-flex items-center px-4 py-2 border border-transparent text-sm font-medium rounded-md shadow-sm text-white bg-blue-600 hover:bg-blue-700">
<i class="fas fa-brain mr-2"></i> cuDNN
</a>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="mt-16 bg-white rounded-xl shadow-lg overflow-hidden fade-in">
<div class="p-6 sm:p-8">
<h2 class="text-2xl font-bold text-gray-900 mb-6">Implementation Guide</h2>
<div class="space-y-6">
<div class="flex flex-col md:flex-row">
<div class="flex-shrink-0 mb-4 md:mb-0 md:mr-6">
<div class="flex items-center justify-center h-12 w-12 rounded-full bg-purple-100 text-purple-600">
<i class="fas fa-download text-xl"></i>
</div>
</div>
<div>
<h3 class="text-lg font-medium text-gray-900">1. Software Requirements</h3>
<p class="mt-2 text-gray-600">
Install <strong>Oobabooga's Text Generation WebUI</strong> or <strong>KoboldAI</strong> to run these models locally.
Your RTX 3060 will perform best with 4-bit quantized GGUF models using llama.cpp with GPU acceleration.
</p>
</div>
</div>
<div class="flex flex-col md:flex-row">
<div class="flex-shrink-0 mb-4 md:mb-0 md:mr-6">
<div class="flex items-center justify-center h-12 w-12 rounded-full bg-purple-100 text-purple-600">
<i class="fas fa-cog text-xl"></i>
</div>
</div>
<div>
<h3 class="text-lg font-medium text-gray-900">2. Recommended Settings</h3>
<p class="mt-2 text-gray-600">
For 13B models: use 4-bit quantization (Q4_K_M), a 2048-4096 token context, and GPU layer offloading; start with 20-25 layers and raise the count until VRAM is nearly full.
For 7B models: you can raise the context size (up to 8192) with otherwise similar settings.
</p>
</div>
</div>
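The settings above translate directly into llama.cpp launch parameters. A minimal sketch assuming the <code>llama-cpp-python</code> bindings; the model path and the <code>n_gpu_layers</code> value are starting-point assumptions, not measured optima:

```python
# Starting parameters for a 13B Q4_K_M GGUF on a 12GB RTX 3060.
# The llama-cpp-python call is commented out so this sketch runs
# without the model file; uncomment once the .gguf is downloaded.
SETTINGS = {
    "model_path": "models/xwin-mlewd-13b-v0.2.Q4_K_M.gguf",
    "n_ctx": 4096,        # context window in tokens
    "n_gpu_layers": 25,   # layers offloaded to GPU; raise until VRAM is full
    "n_threads": 8,       # i7-7700K exposes 8 hardware threads
}

# from llama_cpp import Llama          # pip install llama-cpp-python
# llm = Llama(**SETTINGS)
# print(llm("You are a roleplay character...", max_tokens=200))
```

In the Text Generation WebUI the same knobs appear in the model loader UI as `n-gpu-layers`, `n_ctx`, and `threads`.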
<div class="flex flex-col md:flex-row">
<div class="flex-shrink-0 mb-4 md:mb-0 md:mr-6">
<div class="flex items-center justify-center h-12 w-12 rounded-full bg-purple-100 text-purple-600">
<i class="fas fa-lightbulb text-xl"></i>
</div>
</div>
<div>
<h3 class="text-lg font-medium text-gray-900">3. Performance Tips</h3>
<p class="mt-2 text-gray-600">
Close other GPU-intensive applications while the model is running, and increase the Windows page file if you hit out-of-memory crashes.
Keeping the model file on the M.2 NVMe drive will shorten load times.
</p>
</div>
</div>
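As a rough rule of thumb, you can estimate how many layers fit on the GPU from the quantized file size. This back-of-the-envelope helper assumes weight memory is spread evenly across layers, reserves ~2 GB for the KV cache and CUDA overhead, and takes 40 transformer layers for a 13B model; all three are simplifying assumptions:

```python
def layers_that_fit(model_gb: float, n_layers: int, vram_gb: float,
                    reserve_gb: float = 2.0) -> int:
    """Estimate how many model layers fit in VRAM, leaving headroom
    for the KV cache and CUDA overhead."""
    per_layer_gb = model_gb / n_layers
    budget = max(vram_gb - reserve_gb, 0.0)
    return min(n_layers, int(budget / per_layer_gb))

# 13B Q4_K_M (~7.9 GB, ~40 layers) on a 12 GB RTX 3060:
print(layers_that_fit(7.9, 40, 12.0))
```

On this hardware the estimate suggests the whole 13B Q4_K_M model fits on the GPU, which is why it is worth raising `n_gpu_layers` beyond the conservative starting value.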
</div>
</div>
</div>
<div class="mt-16 bg-purple-50 rounded-xl shadow-lg overflow-hidden fade-in">
<div class="p-6 sm:p-8">
<h2 class="text-2xl font-bold text-gray-900 mb-6">Alternative Options</h2>
<div class="grid grid-cols-1 md:grid-cols-2 gap-6">
<div class="bg-white p-6 rounded-lg shadow">
<h3 class="text-lg font-medium text-gray-900 mb-3"><i class="fas fa-cloud mr-2 text-blue-500"></i> Cloud-Based Solutions</h3>
<p class="text-gray-600">
If you want to experiment with larger models (20B+ parameters), consider cloud services like RunPod or Vast.ai where you can rent GPU power by the hour.
</p>
</div>
<div class="bg-white p-6 rounded-lg shadow">
<h3 class="text-lg font-medium text-gray-900 mb-3"><i class="fas fa-exchange-alt mr-2 text-green-500"></i> Smaller/Faster Models</h3>
<p class="text-gray-600">
For even faster responses, try 7B models like <strong>Mistral-7B-Instruct</strong> or <strong>OpenHermes-2.5-Mistral-7B</strong> with NSFW prompts.
</p>
</div>
</div>
</div>
</div>
</div>
<footer class="bg-gray-900 text-white py-12">
<div class="max-w-7xl mx-auto px-4 sm:px-6 lg:px-8">
<div class="text-center">
<h3 class="text-xl font-bold">AI Erotic Chat Recommendations</h3>
<p class="mt-4 text-gray-400">
Always ensure you're complying with local laws and regulations when using AI models.
</p>
<div class="mt-6 flex justify-center space-x-6">
<a href="#" class="text-gray-400 hover:text-white">
<i class="fab fa-github text-2xl"></i>
</a>
<a href="#" class="text-gray-400 hover:text-white">
<i class="fab fa-discord text-2xl"></i>
</a>
</div>
</div>
</div>
</footer>
<script>
// Simple animation trigger
document.addEventListener('DOMContentLoaded', function() {
const elements = document.querySelectorAll('.fade-in');
elements.forEach((el, index) => {
setTimeout(() => {
el.style.opacity = 1;
}, index * 100);
});
});
</script>
<p style="border-radius: 8px; text-align: center; font-size: 12px; color: #fff; margin-top: 16px;position: fixed; left: 8px; bottom: 8px; z-index: 10; background: rgba(0, 0, 0, 0.8); padding: 4px 8px;">Made with <img src="https://enzostvs-deepsite.hf.space/logo.svg" alt="DeepSite Logo" style="width: 16px; height: 16px; vertical-align: middle;display:inline-block;margin-right:3px;filter:brightness(0) invert(1);"><a href="https://enzostvs-deepsite.hf.space" style="color: #fff;text-decoration: underline;" target="_blank" >DeepSite</a> - 🧬 <a href="https://enzostvs-deepsite.hf.space?remix=j4myjohn/chat-bot" style="color: #fff;text-decoration: underline;" target="_blank" >Remix</a></p></body>
</html>